Gecko:AcceleratedFilters


Accelerated SVG and CSS Filters

Requirements

Spec: https://dvcs.w3.org/hg/FXTF/raw-file/default/filters/index.html

There are three kinds of filters we need to support:

  • CSS filter shorthands (e.g. opacity(), grayscale())
  • SVG filters
  • Custom filters (vertex and fragment shaders defined in GLSL)

CSS filter shorthands are defined by mapping into SVG filters and custom filters in a fairly straightforward way, so we can limit our attention to the latter two if we want to.

There are a few places we will want to use filters (sooner or later):

  • CSS 'filter' property applied to an element, which makes the element into a stacking context and filters the element and its descendants as a group. In general this requires that we support filters in the OMTC compositor *and* in non-compositor content-drawing contexts such as printing, drawWindow and -moz-element().
  • Some yet-to-be-proposed 'globalFilter' 2D canvas attribute, which applies the filter when rendering canvas primitives.
  • The 'filter()' CSS image value, which applies a filter to an image wherever a CSS image value can be used (e.g. background-image).


We currently support SVG filters on all platforms, and we shouldn't regress that. It should be OK to limit support for GLSL custom filters to platforms where we support WebGL.

Obviously we want to use the GPU for filter processing wherever possible, especially in situations where we don't need the results of filter processing to be read by the CPU.

One non-obvious requirement is that we want to minimize the dependence of filter processing time on the pixel values being filtered, to protect against timing-based data extraction attacks.

Implications

We will want some kind of Layers API for attaching filters to layers. This is essential for OMTC to work with filters.

We will want some kind of Moz2D API for 2D drawing with filters. This is needed for canvas 'globalFilter' to be efficient, and will be very helpful for supporting filters in BasicCompositor (which implements OMTC using Moz2D).

These APIs should both consume the same kind of internal representation of filters, so we can share code for mapping CSS filters, SVG filters and custom filters into our internal representation. This representation needs to be IPDL-compatible so we can easily ship it as part of a layer transaction.


Filter Objects

We could try to map all filters down to GLSL and support only GLSL custom filters (using ANGLE on Windows). This would make it difficult to have a performant CPU implementation of common filters. We could pattern-match particular GLSL filters in a CPU implementation (and reject the rest entirely, probably), but that could be more fragile than having an actual API that captures which filters are supported by CPU implementations. Also, because a single filter could contain a sequence of (vertex shader, fragment shader) custom filter primitives, and an SVG filter can be a DAG of filter primitives, we can't boil a filter down to a single (vertex shader, fragment shader) combination anyway.

Proposal: make a filter object a DAG of filter primitives, where each filter primitive is either one of the SVG filter primitives or a GLSL custom filter.

Many filter primitives take parameters. There are a few things to watch out for:

  • The common animation infrastructure shared by the compositor and style system needs to be extended to support interpolation between filter objects, with their parameters.
  • When a CSS transition or animation applies to parameters of filter primitives in an SVG <filter> element, it can affect the rendering of the filtered content, but there is no easy way to express this to the OMTC compositor so that it can do OMTA of those parameters. (Gecko does not yet support such animations, but will have to eventually, since the SVG and CSS WGs have resolved to promote some of those parameters from attributes to CSS properties.) Proposal: ignore this issue for now and do not support OMTA of those parameters.
  • Some primitives (<feImage>, custom filters) take parameters that are images. These will have to be external to the filter object and supplied separately when the filter object is used.
    • Proposal: When using a filter object with Moz2D, pass in a list of Moz2D SourceSurfaces to use as the image parameters
    • Proposal: When attaching a filter object to a Layer, attach a list of special child Layers to use as the image parameters (similar to the way we attach a special child Layer to mask with)

Implementation

There are lots of ways we could organize this.

Perhaps the simplest approach meeting all our needs would be:

  • A GLContext implementation that takes a filter object, a GL object for the source data, GL objects for the image parameters, and composites the result of the filter into another GL object.
  • A CPU implementation that takes a filter object, a DataSourceSurface for the source data, DataSourceSurfaces for the image parameters, and computes the result of the filter into a new DataSourceSurface.
    • Many filter primitives would benefit from a SIMD implementation.

Each Moz2D backend would select either the GLContext implementation or the CPU implementation. Moz2D-D2D would need to use an ANGLE GLContext and wrap SourceSurfaceD2D and DrawTargetD2D objects into GL objects via some ANGLE hack.

The BasicCompositor layer compositor would call down into the Moz2D API. Other layer compositors would use the GLContext implementation. D3D-based compositors would need to use ANGLE.

Unaddressed Issues

Filters can reference the page contents behind a filtered element using BackgroundImage/BackgroundAlpha. Supporting this is somewhat orthogonal to the above concerns.