Examples:
* The Codec/Camera code configures it to have several buffers (depending on hardware, for instance 9 buffers for camera preview on Unagi) and run asynchronously.
* SurfaceFlinger (the Android compositor) configures it to have 2-3 buffers (depending on the [http://en.wikipedia.org/wiki/Board_support_package BSP]) and run synchronously.
* Google Miracast uses it to encode OpenGL-rendered surfaces on-the-fly.
Let us now describe how the client-server SurfaceTexture system allocates GraphicBuffer's, and how both the client and server sides keep track of these shared buffer handles. Again, SurfaceTexture is the server-side class, while SurfaceTextureClient is the client-side class. Each of them stores an array of GraphicBuffers, which is called mSlots in both classes. The GraphicBuffer objects are separate instances in SurfaceTexture and in SurfaceTextureClient, but the underlying buffer handles are the same.

The mechanism here is as follows. The client side issues a SurfaceTextureClient::dequeueBuffer call to get a new buffer to paint to. If there is not already a free buffer in mSlots, and the number of buffers is under the limit (e.g. 3 for triple buffering), it sends an IPC message that results in a call to SurfaceTexture::dequeueBuffer, which allocates a GraphicBuffer. After this transaction, still inside of SurfaceTextureClient::dequeueBuffer, another IPC message is sent, which results in a call to SurfaceTexture::requestBuffer; the GraphicBuffer is serialized back over IPC using GraphicBuffer::flatten and GraphicBuffer::unflatten, and the client caches it into its own mSlots. The mSlots arrays on both sides mirror each other, so that the two sides can refer to GraphicBuffer's by index. This allows the client and server side to communicate with each other by passing only indices, without flattening/unflattening GraphicBuffers again and again.
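To make this bookkeeping concrete, here is a minimal sketch of the slot-mirroring idea. The classes below are simplified stand-ins of our own, not the actual SurfaceTexture/SurfaceTextureClient code: there is no Binder IPC, no flatten/unflatten, no reuse of free slots and no buffer-count limit. They only show how the two mSlots arrays end up mirroring each other.

<pre>
// Simplified stand-ins, not the real AOSP classes: the real code goes through
// Binder IPC and GraphicBuffer::flatten/unflatten, reuses free slots, and
// enforces a buffer-count limit. This only illustrates the mSlots mirroring.
#include <cstdint>
#include <vector>

struct BufferHandle { uint64_t id; };            // stands in for a gralloc buffer handle

class ServerSlots {                               // plays the role of SurfaceTexture
public:
    // Like SurfaceTexture::dequeueBuffer: allocate a new buffer, return its slot index.
    int dequeueBuffer() {
        mSlots.push_back(BufferHandle{mNextId++});
        return int(mSlots.size()) - 1;
    }
    // Like SurfaceTexture::requestBuffer: hand the buffer for that slot back to the
    // client (serialized over IPC in the real code).
    BufferHandle requestBuffer(int slot) const { return mSlots[slot]; }
private:
    std::vector<BufferHandle> mSlots;             // server-side slot array
    uint64_t mNextId = 1;
};

class ClientSlots {                               // plays the role of SurfaceTextureClient
public:
    explicit ClientSlots(ServerSlots& server) : mServer(server) {}

    // Like SurfaceTextureClient::dequeueBuffer: ask the server for a slot, then cache
    // the returned buffer in the client-side mSlots mirror.
    int dequeueBuffer() {
        int slot = mServer.dequeueBuffer();            // first round-trip
        if (slot >= int(mSlots.size()))
            mSlots.resize(slot + 1);
        mSlots[slot] = mServer.requestBuffer(slot);    // second round-trip
        return slot;                                   // from now on, only this index travels
    }
private:
    ServerSlots& mServer;
    std::vector<BufferHandle> mSlots;             // client-side slot array, mirrors the server's
};
</pre>

The important part is the return value: once a slot has been filled on both sides, every later exchange about that buffer (queueing, dequeueing, compositing) only needs to mention the slot index.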
Let us now describe what happens in SurfaceTextureClient::dequeueBuffer when there is no free buffer and the number of buffers has already met the limit (e.g. 3 for triple buffering). In this case, the server side (SurfaceTexture::dequeueBuffer) waits for a buffer to be freed, that is, for a previously queued buffer to be consumed and released, and the client side waits for that reply, so SurfaceTextureClient::dequeueBuffer does not return until a buffer has actually become available again on the server side. This is what allows SurfaceTexture to support both synchronous and asynchronous modes with the same API.
Let us now explain how synchronous mode works. In synchronous mode, on the client side, when ANativeWindow::queueBuffer is called (inside of eglSwapBuffers) to present the frame, it sends the index of the rendered buffer to the server side. This causes the index to be queued into SurfaceTexture::mQueue for rendering. SurfaceTexture::mQueue is a wait queue of frames waiting to be rendered. In synchronous mode, all the frames are shown one after another, whereas in asynchronous mode frames may be dropped.
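The following sketch ties the last two paragraphs together. It is an illustration under simplifying assumptions, not the AOSP logic: BufferQueueSketch and its members are our own names, a plain mutex/condition variable stands in for Binder, and the consumer releases a buffer as soon as it acquires the next one. It shows a dequeue that blocks while every buffer is in flight, a queueBuffer that appends to the equivalent of mQueue in synchronous mode, and the frame-dropping behaviour of asynchronous mode.

<pre>
// Illustrative only: BufferQueueSketch is our own stand-in for the queueing
// logic described above, not the AOSP implementation.
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

class BufferQueueSketch {
public:
    BufferQueueSketch(std::size_t bufferCount, bool synchronous)
        : mFreeCount(bufferCount), mSynchronous(synchronous) {}

    // Role of SurfaceTexture::dequeueBuffer: if every buffer is in flight,
    // block until the consumer releases one. The client's
    // SurfaceTextureClient::dequeueBuffer is waiting on the IPC reply, so it
    // does not return either.
    void dequeueBuffer() {
        std::unique_lock<std::mutex> lock(mMutex);
        mCondition.wait(lock, [this] { return mFreeCount > 0; });
        --mFreeCount;
    }

    // Role of SurfaceTexture::queueBuffer: the client only sends the slot
    // index of the buffer it just rendered.
    void queueBuffer(int slot) {
        std::lock_guard<std::mutex> lock(mMutex);
        if (!mSynchronous && !mQueue.empty()) {
            // Asynchronous mode: the pending frame is replaced, i.e. dropped.
            release(mQueue.back());
            mQueue.pop_back();
        }
        mQueue.push_back(slot);  // synchronous mode: every frame stays queued
    }

    // Role of the consumer (SurfaceTexture::updateTexImage): take the oldest
    // queued frame; releasing it afterwards is what unblocks dequeueBuffer.
    bool acquireNextFrame(int* slotOut) {
        std::lock_guard<std::mutex> lock(mMutex);
        if (mQueue.empty())
            return false;
        *slotOut = mQueue.front();
        mQueue.pop_front();
        release(*slotOut);  // simplification: release immediately after acquiring
        return true;
    }

private:
    void release(int /*slot*/) {   // must be called with mMutex held
        ++mFreeCount;
        mCondition.notify_one();
    }

    std::mutex mMutex;
    std::condition_variable mCondition;
    std::deque<int> mQueue;        // plays the role of SurfaceTexture::mQueue
    std::size_t mFreeCount;
    const bool mSynchronous;
};
</pre>

A double- or triple-buffered setup corresponds to constructing this with bufferCount 2 or 3; in synchronous mode the producer then cannot get more than that many frames ahead of the compositor.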
Let us now explain how SurfaceFlinger (the Android compositor) uses this system to get images onto the screen. Each time SurfaceTexture::queueBuffer is called, it causes SurfaceFlinger to start the next iteration of the render loop. In each iteration, SurfaceFlinger calls SurfaceTexture::updateTexImage to dequeue a frame from SurfaceTexture::mQueue and bind that GraphicBuffer into a texture, just like we do in GrallocTextureHostOGL::Lock.
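For reference, here is roughly what "binding that GraphicBuffer into a texture" looks like at the EGL/GL level. This is a sketch of our own, not the Gecko or AOSP code: the helper name is made up, error handling is omitted, and we assume a driver that exposes the EGL_ANDROID_image_native_buffer and OES_EGL_image_external extensions.

<pre>
// Sketch, not the Gecko or AOSP code: bindGraphicBufferToTexture is a made-up
// helper. It assumes the EGL_ANDROID_image_native_buffer and
// OES_EGL_image_external extensions are available and omits all error handling.
#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

// 'clientBuffer' is the ANativeWindowBuffer* behind the GraphicBuffer
// (GraphicBuffer::getNativeBuffer() in AOSP), cast to EGLClientBuffer.
EGLImageKHR bindGraphicBufferToTexture(EGLDisplay display,
                                       EGLClientBuffer clientBuffer,
                                       GLuint texture)
{
    const EGLint attribs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };

    // Wrap the gralloc buffer in an EGLImage; no pixel copy takes place.
    EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                          EGL_NATIVE_BUFFER_ANDROID,
                                          clientBuffer, attribs);

    // Attach the EGLImage to the texture object; sampling the texture now
    // reads straight from the shared buffer. Attaching a different image to
    // the same texture later on replaces this attachment.
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)image);

    return image;  // to be destroyed with eglDestroyImageKHR once no longer needed
}
</pre>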
The magic part is that SurfaceFlinger does not need to do the equivalent of GrallocTextureHostOGL::Unlock. In our case, we have a separate OpenGL texture object for each TextureHost, and each TextureHost typically (at least in the case of a ContentHost) represents one buffer (so a double-buffered ContentHost has two TextureHost's). So we have to unbind the GraphicBuffer from the OpenGL texture before we can hand it back to the content side; otherwise it would remain locked for read and couldn't be locked for write for content drawing. By contrast, SurfaceFlinger does not need to worry about this, because it uses only one OpenGL texture: when it binds a new GraphicBuffer to it for compositing, that automatically unbinds the previous one!