Platform/GFX/Gralloc

The other benefit of this system is that most [http://en.wikipedia.org/wiki/Board_support_package BSP] vendors provide graphics profilers (e.g. Adreno Profiler from QCOM, PerfHUD ES from nVidia) that recognize eglSwapBuffers calls as frame boundaries and use them to collect per-frame GL information from the driver, which helps development and performance tuning.
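For context, here is a minimal sketch (illustration only, not code from this page) of a typical EGL/GLES render loop: one eglSwapBuffers call is issued per frame, and that call is exactly what these profilers key on as the frame boundary.
<pre>
// Minimal illustrative render loop; EGL/GLES setup code is omitted.
#include <EGL/egl.h>
#include <GLES2/gl2.h>

void renderLoop(EGLDisplay dpy, EGLSurface surface) {
    for (;;) {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the frame here ...

        // Present the frame. Under the hood this queues a gralloc buffer;
        // profilers treat this call as the end of one frame.
        eglSwapBuffers(dpy, surface);
    }
}
</pre>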


In Android 2, there were many buffer management systems. In Android 4, all of this is unified under SurfaceTexture. This is made possible by the great flexibility of SurfaceTexture:
* SurfaceTexture supports both synchronous and asynchronous modes.
* SurfaceTexture supports generic multi-buffering: it can have any number of buffers between 1 and 32.


Examples:
* The Codec/Camera code configures it to have several buffers (depending on hardware, for instance 9 buffers for camera preview on Unagi) and run asynchronously.
* The Android compositor configures it to have 2-3 buffers (depending on [http://en.wikipedia.org/wiki/Board_support_package BSP]) and run synchronously.
* Google Miracast uses it to encode OpenGL-rendered surfaces on-the-fly.
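To make the first two cases concrete, here is a hedged sketch of how a producer-side component might configure a SurfaceTexture. It assumes the Android 4.0-era libgui API (SurfaceTexture::setSynchronousMode and SurfaceTexture::setBufferCountServer); these names and headers changed in later Android releases, so treat this as illustrative only.
<pre>
// Illustrative sketch only; assumes the Android 4.0-era SurfaceTexture API.
#include <gui/SurfaceTexture.h>

using namespace android;

// Camera-preview style setup: deep buffering, asynchronous (frames may drop).
void configureForCameraPreview(const sp<SurfaceTexture>& st) {
    st->setSynchronousMode(false);
    st->setBufferCountServer(9);   // e.g. 9 preview buffers on Unagi
}

// Compositor style setup: double/triple buffering, synchronous (no drops).
void configureForCompositor(const sp<SurfaceTexture>& st) {
    st->setSynchronousMode(true);
    st->setBufferCountServer(3);   // 2-3 buffers depending on the BSP
}
</pre>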
 
Let us now describe how the client-server SurfaceTexture system allocates GraphicBuffers, and how both the client and server sides keep track of these shared buffer handles.
 
Again, SurfaceTexture is the server-side class, while SurfaceTextureClient is the client-side class. Each of them stores an array of GraphicBuffers, which is called mSlots in both classes. The GraphicBuffer objects are separate instances in SurfaceTexture and in SurfaceTextureClient, but the underlying gralloc buffer handles are the same. The mechanism is as follows. The client side issues a SurfaceTextureClient::dequeueBuffer call to get a new buffer to paint to. If there is not already a free buffer in mSlots, and the number of buffers is not above the limit (32), it sends an IPC message that results in a call to SurfaceTexture::dequeueBuffer, which allocates a GraphicBuffer. After this transaction, still inside of SurfaceTextureClient::dequeueBuffer, another IPC message is sent that results in a call to SurfaceTexture::requestBuffer; the GraphicBuffer is serialized back over IPC, using GraphicBuffer::flatten and GraphicBuffer::unflatten, and the client caches it into its own mSlots. The mSlots arrays on both sides mirror each other, so that the two sides can refer to GraphicBuffers by index. This allows the client and server sides to communicate by passing only indices, without flattening/unflattening GraphicBuffers again and again.
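The following hypothetical, heavily simplified sketch shows the shape of this client-side protocol. It is not the real SurfaceTextureClient code: the RemoteSurfaceTexture interface stands in for the ISurfaceTexture binder proxy, the width/height/format/usage parameters are elided, and the helper names (ClientSlot, clientDequeueBuffer) are invented for illustration.
<pre>
// Hypothetical sketch of the client-side dequeue path described above.
#include <ui/GraphicBuffer.h>   // Android 4.0-era header for GraphicBuffer
#include <utils/Errors.h>       // status_t, NO_ERROR
#include <utils/RefBase.h>      // sp<>

using namespace android;

struct RemoteSurfaceTexture {    // stand-in for the ISurfaceTexture proxy
    virtual status_t dequeueBuffer(int* slot) = 0;                        // -> SurfaceTexture::dequeueBuffer
    virtual status_t requestBuffer(int slot, sp<GraphicBuffer>* buf) = 0; // -> SurfaceTexture::requestBuffer
    virtual ~RemoteSurfaceTexture() {}
};

static const int NUM_BUFFER_SLOTS = 32;

struct ClientSlot {
    sp<GraphicBuffer> buffer;    // client-side mirror of one server-side slot
};

status_t clientDequeueBuffer(RemoteSurfaceTexture* remote,
                             ClientSlot mSlots[NUM_BUFFER_SLOTS],
                             sp<GraphicBuffer>* outBuffer) {
    int slot = -1;

    // 1. Ask the server for a free slot; if none is free and fewer than 32
    //    buffers exist, the server allocates a new GraphicBuffer.
    status_t err = remote->dequeueBuffer(&slot);
    if (err != NO_ERROR)
        return err;

    // 2. First time this slot is seen: fetch the buffer. On the server,
    //    requestBuffer flattens the GraphicBuffer (and its gralloc handle)
    //    over IPC; the client unflattens it and caches it in its own mSlots.
    if (mSlots[slot].buffer == NULL) {
        err = remote->requestBuffer(slot, &mSlots[slot].buffer);
        if (err != NO_ERROR)
            return err;
    }

    // 3. From here on, both sides refer to this buffer by its slot index only.
    *outBuffer = mSlots[slot].buffer;
    return NO_ERROR;
}
</pre>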


When the client side calls ANativeWindow::queueBuffer to present the frame from an eglSwapBuffers call, SurfaceTextureClient::queueBuffer is invoked and sends the index of the rendered buffer to the server-side SurfaceTexture::queueBuffer. This causes the index to be queued into SurfaceTexture::mQueue for rendering. SurfaceTexture::mQueue is a wait queue of frames waiting to be rendered. In synchronous mode, frames are shown one after another; in asynchronous mode, frames may be dropped.
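As a rough illustration of the difference between the two modes, here is a hypothetical, much-simplified sketch of what the server does with the queued index; the real logic lives in SurfaceTexture::queueBuffer and is more involved (for instance, the dropped frame's buffer slot has to be handed back to the producer).
<pre>
// Hypothetical sketch of the sync vs. async queueing behaviour described above.
#include <queue>

struct ServerState {
    std::queue<int> mQueue;   // slot indices of frames waiting to be composited
    bool mSynchronousMode;
};

void serverQueueBuffer(ServerState& st, int slot) {
    if (st.mSynchronousMode) {
        // Sync mode: every queued frame stays in mQueue and will be shown,
        // so the producer may end up waiting for a free buffer.
        st.mQueue.push(slot);
    } else {
        // Async mode: only the newest frame matters. A frame that is still
        // pending is dropped (its slot becomes free again) and replaced.
        if (!st.mQueue.empty())
            st.mQueue.pop();
        st.mQueue.push(slot);
    }
}
</pre>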