Gecko:2DGraphicsSketch

The ideas I think are actually important are emphasized ... the rest of the details I care less about.

The ideas here are based on the APIs for cairo surface backends, plus the canvas 2D API (with the goal of mapping canvas 2D API directly onto the new API for maximum efficiency), plus general observations about where we need to reduce overhead, plus some thought about how to map efficiently onto CoreGraphics's stateful API and cairo's surface API. All kinds of input needed.

SourceBuffers

Abstraction of a "source image" that can be drawn anywhere. Refcounted and heap-allocated. The image data is immutable --- it's a snapshot. Any SourceBuffer can be used with any DrawTarget backend; readback will be performed if necessary.

enum BufferType { DATA, D2D_SURFACE, CAIRO_SURFACE, ... };
class SourceBuffer {
public:
  BufferType Type() { return mBufferType; }
  IntSize GetSize();
  /**
   * Return a DataSourceBuffer reference which holds the data in main memory. This API
   * provides readback.
   */
  already_AddRefed<DataSourceBuffer> GetData();
private:
  BufferType mBufferType;
};

XXX Need to add API so that backend-specific data can be cached here.

Bas: In my opinion the backend classes should just be subclasses, with the buffer type inferred from that. It seems like a much better solution, and we can create these via a create function on a DrawTarget (which will create a buffer optimized for that draw target, much like cairo's create_similar and D2D's CreateBitmap).

DataSourceBuffers

Like a cairo_image_surface or gfxImageSurface. Provides a way to interface with system-memory pixmaps.

enum DataFormat { A8, RGB24, ARGB32 };
class DataSourceBuffer : public SourceBuffer {
public:
  DataFormat GetFormat();
  unsigned char* GetData();
  uint32_t GetStride();

  // These APIs take ownership of the data.
  static already_AddRefed<DataSourceBuffer> CreateForARGB32(IntSize aSize, unsigned char* aData, uint32_t aStride);
  static already_AddRefed<DataSourceBuffer> CreateForRGB24(IntSize aSize, unsigned char* aData, uint32_t aStride);
  static already_AddRefed<DataSourceBuffer> CreateForA8(IntSize aSize, unsigned char* aData, uint32_t aStride);
};
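
For illustration, a minimal usage sketch (assuming an IntSize with width/height fields and a Mozilla-style RefPtr; the exact policy for freeing the owned data isn't specified here):

IntSize size(256, 128);
uint32_t stride = size.width * 4;            // 4 bytes per ARGB32 pixel
unsigned char* data = new unsigned char[stride * size.height];
memset(data, 0, stride * size.height);       // start fully transparent

// Ownership of 'data' passes to the buffer, per the comment above.
RefPtr<DataSourceBuffer> buffer =
    DataSourceBuffer::CreateForARGB32(size, data, stride);
// 'buffer' is a SourceBuffer, so it can be drawn by any DrawTarget backend.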

Backend

A lot of our objects are wrappers around "native" objects for a given backend. We can't expect wrappers for different backends to work interchangeably, so apart from SourceBuffers I suggest requiring that all the objects working together have the same backend. We should provide API to ensure that, via a backend-type enumeration.

// Labels the type of backend associated with mPrivateFlags and mPrivate data
enum BackendType { NONE, ... };

Paths

The goal is for C++ Path objects to be lightweight wrappers around some underlying heap-allocated native object reference (e.g. a CGPathRef or an ID2D1Geometry). We want to avoid having to make extra heap allocations just to allocate a wrapper for some underlying object that's just an opaque handle. So these Paths aren't refcounted and don't need to be heap-allocated; they are regular C++ values. Since we need some backend-specific methods, we initialize and copy Path objects using placement new to override their classes. (Thus, subclasses aren't allowed to add fields.) Direct assignment between Path objects is allowed. The base class can be instantiated, so we can assign other paths to it, but its methods don't work.

(A design that doesn't use the placement new trick would be to have a hand-made vtable pointer --- a pointer to a struct of function pointers --- in the Path object that gets overwritten as necessary. But this seems clumsier than using C++ dispatch.)

Paths are immutable once created; in particular you can't add to a path after creating and using it, and there's typically no need for that. You can however overwrite one Path value with another. Keeping Paths immutable should make serialization and caching (e.g. tessellation caching) simpler.

Paths (conceptually) keep all their coordinates in user space because that's what we want when we retain paths, e.g. for SVG shapes. The cairo/canvas-2D feature where path points are transformed by the current matrix as they're added can and should be emulated at the nsCanvasRenderingContext2D layer. (Basically we'd emit the path in user space coordinates, but if we detect a matrix change we'd convert the current path to device space coordinates and maintain it in device space thereafter.)
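
To illustrate, a rough sketch of that emulation at the canvas layer (all names here are illustrative; this is not the real nsCanvasRenderingContext2D code, and mPathBegun/mPathInDeviceSpace/ConvertCurrentPathToDeviceSpace are hypothetical members and helpers):

// Sketch only: emit path points in user space until the first matrix change,
// then convert the in-progress path to device space and keep it there.
void nsCanvasRenderingContext2D::SetTransform(const Matrix& aNewMatrix)
{
  if (mPathBegun && !mPathInDeviceSpace) {
    // Re-emit the current path with the old CTM applied so its points
    // become device-space coordinates (hypothetical helper).
    ConvertCurrentPathToDeviceSpace(mTarget->GetCurrentMatrix());
    mPathInDeviceSpace = true;
  }
  mTarget->SetCurrentMatrix(aNewMatrix);
}

void nsCanvasRenderingContext2D::LineTo(Point aPt)
{
  // Once in device space, transform incoming points as they arrive;
  // otherwise pass them through untouched in user space.
  if (mPathInDeviceSpace) {
    aPt = mTarget->GetCurrentMatrix() * aPt;   // assumes a Matrix*Point operator
  }
  mTarget->PathLineTo(aPt);
}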

Bas: I think this is generally not the best idea. Cairo's model of converting at drawing time avoids a lot of transformation in the general case. I believe a wiser model would be like Direct2D's, where a path object exposes a transform method which writes a transformed version of the path into another 'pathbuilder' (see my comments below). Backends can then internally optimize this process (i.e. simply transform an internal triangulation which was kept with the path), which makes retaining a good thing. At the same time, in the common case where a path is retained but not transformed differently, the triangulation for a path can easily be re-used every time it's drawn, without having to figure out whether the transform changed etc.

Paths can only be used with a DrawTarget using the same backend type.

class Path {
public:
  // Default constructor gives an "empty" base-class Path that can be
  // overwritten later by assignment (needed since declaring the copy
  // constructor suppresses the implicit default constructor).
  Path() : mBackend(NONE), mPrivateFlags(0), mPrivatePtr(NULL) {}
  Path(const Path& aPath) { aPath.CopyTo(this); }
  BackendType Backend() { return mBackend; }
  virtual ~Path() {}

  // CopyTo uses placement new to set aDest to the right implementation class.
  // Note that if the underlying mPrivatePtr data is refcounted, copying
  // it is just an addref.
  virtual void CopyTo(Path* aDest) const { assert(false); }
  Path& operator=(const Path& aPath) { aPath.CopyTo(this); return *this; }

  virtual bool ContainsPoint(Point aPt) { assert(false); return false; }

protected:
  BackendType mBackend : 8;
  uint32_t mPrivateFlags : 16;
  void* mPrivatePtr;
};
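
To make the placement-new trick concrete, here's a sketch of a hypothetical backend subclass (the name D2DPath and the ID2D1Geometry refcounting calls are only examples). Note it adds no fields, so it can safely overwrite any Path-sized storage:

// Illustrative sketch only; placement new needs <new>.
class D2DPath : public Path {
public:
  virtual void CopyTo(Path* aDest) const {
    // Destroy whatever implementation currently lives in aDest, then
    // construct ourselves in its place.
    aDest->~Path();
    D2DPath* dest = new (aDest) D2DPath();
    dest->mBackend = mBackend;
    dest->mPrivateFlags = mPrivateFlags;
    dest->mPrivatePtr = mPrivatePtr;
    // The native geometry is refcounted, so copying is just an addref.
    if (dest->mPrivatePtr) {
      static_cast<ID2D1Geometry*>(dest->mPrivatePtr)->AddRef();
    }
  }

  virtual bool ContainsPoint(Point aPt) {
    // A real backend would hit-test against the native geometry here.
    return false;
  }

  virtual ~D2DPath() {
    if (mPrivatePtr) {
      static_cast<ID2D1Geometry*>(mPrivatePtr)->Release();
    }
  }
};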

We want to be able to construct paths efficiently by emitting MoveTo/CurveTo calls and having those feed into a native path or path-builder object. We could use a dedicated PathBuilder object for this but we never need to build more than one path at a time so it's simpler to fold the PathBuilder into the DrawTarget and expose path-building directly on the DrawTarget (see below).

Folding path building into the DrawTarget enables optimizations for backends like CoreGraphics: we can emit the path into the underlying context, and the final Path object can temporarily point back to the context and say "I'm the current path in that context", thus avoiding having to copy the path out of the context as long as we don't start building a new path before the Path wrapper is destroyed.

Bas: I absolutely disagree here. I think this is a very bad idea; we should do something like Direct2D does with GeometrySinks (much like your pathbuilder suggestion). It makes for a better API and allows easier asynchronous path creation and such things going forward. I agree it allows a limited optimization for CoreGraphics, but I think it's bad practice to make a poor API decision in order to adapt to another poor API choice.

In my code that I'm working on for Tessellation I've got an API 'kind of' like what I'd want to do (http://hg.mozilla.org/users/bschouten_mozilla.com/2drender/file/115d4e1f71cb/core/Path.h). I'd also expose a SetTransform on the pathbuilder, so that transformation of the points can easily occur while the path is written, without the need for another O(n) pass to perform the transformation.
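
For reference, a minimal sketch of the GeometrySink-style builder being described (names are illustrative and not taken from the linked Path.h):

// Sketch only. In this model paths are refcounted and built through a
// separate builder object rather than through the DrawTarget.
class PathBuilder {
public:
  // Points added after this call are transformed as they're written,
  // avoiding a separate O(n) pass over the coordinates afterwards.
  virtual void SetTransform(const Matrix& aTransform) = 0;

  virtual void MoveTo(Point aPt) = 0;
  virtual void LineTo(Point aPt) = 0;
  virtual void BezierTo(Point aCP1, Point aCP2, Point aEnd) = 0;
  virtual void Close() = 0;

  // Produces the finished, immutable, backend-specific path.
  virtual already_AddRefed<Path> Finish() = 0;
};

// A retained path would also expose something like
//   void Path::StreamTo(PathBuilder* aSink, const Matrix& aTransform);
// writing a transformed copy of itself into another builder, so a backend
// can e.g. transform a cached triangulation instead of rebuilding it.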

I'm also not sure I agree with the general lightweight stack-based wrapper ideas, although I do agree that it's nice to avoid a heap allocation. On the other hand heap allocations are pretty cheap (at least on Windows; I've never tried on other platforms but I'm assuming it's the same), and if you look at how many of them, for example, the STL classes execute internally while still having excellent performance characteristics, I doubt it's going to matter much since we won't be creating -that- many paths. Having these refcounted also gives the advantage of easily allowing us to move ownership between threads when we work on backends that implement concurrency for certain operations, without having to use obscure methods to link these path wrappers to actual path objects.

The bottom line is that, except for some small modifications (for example allowing the transform to be set on the pathbuilder, to avoid a needless iteration when transforming a set of coordinates in user space while also avoiding the need for the API user to 'pre-transform'), I think the ID2D1Geometry model is much more solid than the one proposed here.

Matt: I believe on Mac we wanted to look at using GL for content rendering in the future. Regardless of whether this uses cairo-gl or our own implementation, basing our API decisions on what Quartz does seems unnecessary.

Clips

Bas: I don't think we need separate clip objects at all. A general path should suffice, though I do agree rectangular clips are a common case. So perhaps, much like D2D, we should allow pushing/popping rectangular clips as well (again like D2D; see my comments in the DrawTarget section).

Logically a Clip is the intersection of zero or more device-space paths, but we should special-case rectangular Clips since they're very common.

We treat Clips in a similar way to Paths: lightweight, not-heap-allocated wrappers around platform object pointers. Clips are immutable once created. (You can however overwrite one Clip value with another.)

Clips are not refcounted. They are regular C++ values (can be stack-allocated). As for Paths, they are replaced with backend-specific subclasses via placement new, so subclasses can't add fields. Direct assignment between Clip objects is allowed.

Clips can only be used with a DrawTarget using the same backend type.

class Clip {
public:
  // Default constructor gives an "empty" base-class Clip that can be
  // overwritten later; it's also needed so DrawTarget below can hold a
  // Clip member by value.
  Clip() : mBackend(NONE), mPrivateFlags(0), mPrivate(NULL) {}
  Clip(const Clip& aClip) { aClip.CopyTo(this); }
  BackendType Backend() { return mBackend; }
  virtual ~Clip() {}

  // These methods are non-pure-virtual just so we can instantiate an empty
  // Clip object to be overwritten later.
  virtual void CombineWith(const Matrix& aCTM, const Rect& aRect, Clip* aDest) const { assert(false); }
  // Path must have matching backend type
  virtual void CombineWith(const Matrix& aCTM, const Path& aPath, Clip* aDest) const { assert(false); }
  virtual Rect GetExtents() { assert(false); return Rect(); }
 
  // CopyTo uses placement new to set aDest to the right implementation class.
  // Note that if the underlying mPrivatePtr data is refcounted, copying
  // it is just an addref.
  virtual void CopyTo(Clip* aDest) const { assert(false); }
  Clip& operator=(const Clip& aClip) { aClip.CopyTo(this); return *this; }

protected:
  // Rectangular clips can just store a rect here and use null mPrivate.
  Rect mRect;
  BackendType mBackend : 8;
  uint32_t mPrivateFlags : 24;
  // Represents intersection of clip paths; can be null if this is just a rect.
  // The overall clip area is the intersection of the paths, intersected with mRect
  void* mPrivate;
};
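
A short usage sketch, using the DrawTarget clip/matrix accessors defined below (aTarget is an existing DrawTarget*; the Rect constructor arguments and somePath are assumptions; the empty Clip values are overwritten by CombineWith):

// Start from the target's current clip (copied into a backend-specific
// subclass via CopyTo) and intersect in a rectangle, then a path.
Clip clip = aTarget->GetCurrentClip();

Clip withRect;   // empty base-class value, filled in by CombineWith
clip.CombineWith(aTarget->GetCurrentMatrix(), Rect(10, 10, 100, 100), &withRect);

Clip withPath;
withRect.CombineWith(aTarget->GetCurrentMatrix(), somePath, &withPath);

aTarget->SetCurrentClip(withPath);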

Patterns

The goal here is to be able to quickly stack-allocate patterns which are colors and SourceBuffers, but also to be able to cache backend-specific objects for the pattern. So these are not refcounted, are regular C++ values, and can be stack-allocated. But Pattern subclass objects can be different sizes so direct assignment between different kinds of patterns is not allowed. We do not use the placement new trick to change a Pattern to an implementation class.

Bas: Hrm, I'm not sure we need to quickly stack-allocate these. I think it's better to make the objects lightweight and just use 'real' pattern objects, since particularly for our own hardware backend a lot will go in here. For colors we should probably just cache a bunch and make them easily accessible (much like GDI stock objects), or perhaps even include a full 'maskcolor' in the CompositionOp (we could then blend 'alpha' into that color) and treat NULL as a fully white source. Specifying no pattern with a non-white mask color would then automatically behave as a 'color' pattern, and this would for example map fairly well onto what shaders do internally. Just thinking out loud here though. We also probably want radial/linear gradient patterns separately, and maybe factor out gradient stops like D2D does, although I'm not sure there's much value in the latter.

Patterns can only be used with a DrawTarget using the same backend type.

enum PatternType { COLOR, BUFFER, GRADIENT };

class Pattern {
public:
  virtual ~Pattern() {}
  PatternType Type() { return mType; }
  BackendType Backend() { return mBackend; }

  typedef void (* PrivateDestructor)(Pattern* aPattern);

protected:
  Pattern(PatternType aType) : mType(aType), mPrivate(NULL), mPrivateDestructor(NULL) {}

  PatternType mType : 8;
  BackendType mBackend : 8;
  uint32_t mPrivateFlags : 16;
  void* mPrivate;
  PrivateDestructor mPrivateDestructor;
};

Gradient patterns use the Pattern base class; they have to be created using a DrawTarget, see below.

ColorPattern

class ColorPattern : public Pattern {
public:
  ColorPattern(const Color& aColor) : Pattern(COLOR), mColor(aColor) {}

protected:
  Color mColor;
};

BufferPattern

enum SampleFilter { NEAREST, BILINEAR, ... };

class BufferPattern : public Pattern {
public:
  // Don't tile by default
  BufferPattern(SourceBuffer* aBuffer) : Pattern(BUFFER), mBuffer(aBuffer),
      mSubimage(Point(0, 0), aBuffer->GetSize()), mFilter(BILINEAR) {}

protected:
  RefPtr<SourceBuffer> mBuffer;
  // Specifies subimage in tile space which should be drawn; clamp sampling to this rectangle
  // x/y of INT_MIN or width/height of INT_MAX means don't clamp in that direction.
  // 0,0,aBuffer->GetSize() behaves as EXTEND_PAD, MIN,MIN,MAX,MAX behaves as
  // EXTEND_REPEAT.
  IntRect mSubimage;
  // User-space-to-image-space transform
  Matrix mTransform;
  // Filter to use when resampling
  SampleFilter mFilter;
};

I expect the subimage rect to be controversial. Values other than 0,0,aBuffer->GetSize() and MIN,MIN,MAX,MAX may be difficult to implement. However, it's what layout needs, and backends can probably do better than the temporary surface workarounds that imglib currently uses!
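
A usage sketch of stack-allocated patterns, using the FillRect shortcut defined under DrawTarget below (aTarget, someBuffer, and the Color/Rect constructor arguments are assumptions):

// Patterns are plain stack values, passed by const reference.
FillStyle opaqueFill;
opaqueFill.mBlur = 0.0f;
opaqueFill.mAlpha = 1.0f;

ColorPattern black(Color(0, 0, 0, 1));
aTarget->FillRect(Rect(0, 0, 50, 50), opaqueFill, black, Options());

BufferPattern image(someBuffer);   // default subimage = whole buffer, i.e. EXTEND_PAD
aTarget->FillRect(Rect(10, 10, 200, 100), opaqueFill, image, Options());
// Setting the subimage to (INT_MIN, INT_MIN, INT_MAX, INT_MAX) -- via whatever
// setter the real class grows -- would give EXTEND_REPEAT behaviour instead.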

Options

Pack common drawing flags and options into a single small struct to avoid parameter bloat.

enum CompositionOp { OVER, SOURCE, CLEAR, ... };
enum FillRule { WINDING, EVEN_ODD };
enum AntialiasMode { NONE, GRAY, SUBPIXEL };
enum Snapping { NO_SNAP, SNAP };

struct Options {
  Options() : mCompositionOp(OVER), mFillRule(WINDING),
              mAntialiasMode(GRAY), mSnapping(NO_SNAP) {}

private:
  CompositionOp mCompositionOp : 8;
  FillRule mFillRule : 1;
  AntialiasMode mAntialiasMode : 2;
  Snapping mSnapping : 1;
};

StrokeStyle and FillStyle

Stroking and filling are just two ways to turn a path into a mask into which the source pattern is drawn. StrokeStyle and FillStyle control that process. Therefore blurring and opacity belong here, because they are additional effects on the mask obtained from the path.

I suggest having first-class support for blur and opacity here because canvas needs them (blur for shadows, opacity for global alpha) and many cases can be implemented more efficiently than creating a temporary mask and/or rendering through a temporary surface, or munging the source pattern, as we do now. Blur will also be useful for CSS box-shadow/text-shadow.

struct StrokeStyle {
  float mLineWidth;  
  float mMiterLimit;
  LineCap mLineCap;
  LineJoin mLineJoin;
  float* mDashes;
  uint32_t mDashCount;
  float mDashOffset;
  // If nonzero, blurs the stroke mask before filling it. Units are user-space.
  float mBlur;
  // Multiplies the opacity of the stroke mask by mAlpha before filling it.
  float mAlpha;
};
struct FillStyle {
  // If nonzero, blurs the fill mask before filling it. Units are user-space.
  float mBlur;
  // Multiplies the opacity of the fill mask by mAlpha before filling it.
  float mAlpha;
};
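
For example, a dashed, blurred, half-opaque stroke might be set up like this (a sketch; aTarget, somePath, the LineCap/LineJoin values, and the dash-array ownership are assumptions):

// 2-unit-wide dashed stroke, blurred by 3 user-space units, at 50% opacity.
static float dashes[] = { 4.0f, 2.0f };

StrokeStyle style;
style.mLineWidth = 2.0f;
style.mMiterLimit = 10.0f;
style.mLineCap = ROUND_CAP;      // hypothetical enum value
style.mLineJoin = MITER_JOIN;    // hypothetical enum value
style.mDashes = dashes;
style.mDashCount = 2;
style.mDashOffset = 0.0f;
style.mBlur = 3.0f;
style.mAlpha = 0.5f;

aTarget->Stroke(somePath, style, ColorPattern(Color(0, 0, 0, 1)), Options());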

Bas: I think mAlpha should probably exist on another level. We most likely want to be able to specify alpha on any operation (for example probably also on DrawSurface, which I think should have a dedicated one rather than a rectangular fill with a pattern), so it could go in the CompositionOp. I need to think about what I think of the blur. We also may or may not want different start caps/end caps/dash caps like D2D offers, but I'm not sure of that. We probably want it for dashes so we can make both circular 'dots' and rectangular 'bars'.

Text

My basic idea for text is to use an API like cairo_show_glyphs, but instead of passing a simple glyph array, extract the guts of gfxTextRun --- the text and compact glyph storage --- into a GlyphBuffer abstraction and pass that in. That'll avoid a conversion through the glyph array. Because the text is also available, we can easily call cairo_surface_show_text_glyphs so that, for example, PDF output gets the actual text as well as the glyphs.

/**
 * Like cairo_scaled_font_t.
 */
class ScaledFont {
};

/**
 * The guts of gfxTextRun. Contains a Unicode string
 * and positioned glyphs for the string. Given a character
 * range, you can extract the text in the range, the positioned glyphs
 * and a mapping between them. All methods can be inline (and fast).
 */
class GlyphBuffer {    
};

DrawTarget

DrawTarget is an abstraction for something you can draw into. DrawTargets could get complex, so backends can subclass this class freely. DrawTargets will need to be refcounted and heap-allocated.

This class is mostly stateless. The major piece of state is of course what we've rendered into the target. We also store a CTM and a current clip, since we often want to draw an arbitrary chunk of content with a given CTM and clip, and passing the CTM and clip around alongside the DrawTarget would be overly burdensome. But we give direct set/get access to the clip and matrix --- so no save/restore needed, and it'll be easy to set the clip to anything we want whenever we want. The clip and CTM are implicit parameters to all the drawing APIs.

Bas: I think the push/pop clip model D2D adopts is the most effective. It's fairly easy to implement in an API that allows you to 'set' and 'get' clips, but at the same time it's much more suited to HW-accelerated backends, which will most likely need to create temporary surfaces to execute complex clips (or even anti-aliased rectangular ones, perhaps). Clips which are only applied for a single drawing operation are probably better implemented by intersecting the clip path with the operation path. For the CTM I think set/get sounds good.

The primary drawing primitives correspond to cairo's fill, stroke, mask and show_glyphs --- except there is no paint(); it just doesn't seem useful.

We should also provide shortcuts that correspond to canvas-2D operations. This will offer maximum efficiency, so we don't have to spend cycles detecting special cases when the caller code already knows what the special case is.

Bas: So, the obvious cases I believe are covered here, much like Direct2D does with 'DrawRect', 'FillRect', 'DrawImage' etc. In, for example, the rectangle (and possibly ellipse, etc.) case this also allows us to reuse vertex buffers which already have the correct tessellations and such, since these are very common in general, not just in Canvas2D. As stated above, I think the Path stuff is a very bad idea.

enum ClockDirection { CLOCKWISE, ANTICLOCKWISE };

enum ContentType { ALPHA, OPAQUE, TRANSPARENT, COMPONENT_ALPHA };

// For DrawImage we don't want to create a full Pattern, and we don't want
// to pass in a FillStyle (the blur in FillStyle is only for the path mask,
// and there's no path in DrawImage), but we do want to set a filter
// and alpha.
class DrawImageOptions : public Options {
public:
  DrawImageOptions() : mImageFilter(BILINEAR), mAlpha(1.0f) {}

private:
  SampleFilter mImageFilter;
  float mAlpha;
};

class DrawTarget {
public:
  BackendType Backend() { return mBackend; }

  const Matrix& GetCurrentMatrix() { return mMatrix; }
  // Virtual so we can track matrix changes in the backend.
  virtual void SetCurrentMatrix(const Matrix& aMatrix) { mMatrix = aMatrix; }

  // The initial mClip will have the same backend type as this DrawTarget.
  const Clip& GetCurrentClip() { return mClip; }
  // aClip must have the right backend type.
  // Virtual so we can track clip changes in the backend.
  virtual void SetCurrentClip(const Clip& aClip) { mClip = aClip; }

  // Path construction
  // The "current path" we build here is only used for constructing
  // Path objects. No drawing operations use the "current path".
  virtual void PathMoveTo(Point aPt) = 0;
  virtual void PathLineTo(Point aPt) = 0;
  virtual void PathQuadraticCurveTo(Point aPt1, Point aPt2) = 0;
  virtual void PathBezierCurveTo(Point aPt1, Point aPt2, Point aPt3) = 0;
  virtual void PathArcTo(Point aPt1, Point aPt2, double aRadius) = 0;
  virtual void PathArc(Point aCenter, double aRadius, double aStartAngle, double aEndAngle, ClockDirection aDirection) = 0;
  virtual void PathRectangle(Point aPt1, Size aSize) = 0;
  virtual void PathAddPath(const Path& aPath) = 0;
  // Stores the current path into aDest where it can be used.
  // The current path is cleared. Guarantees that the backend type of aDest
  // is set to the backend type of this DrawTarget.
  virtual void FinishPath(Path* aDest) = 0;

  // As for paths, gradient construction is directly on the DrawTarget,
  // but nothing uses the "current gradient".
  // Unlike for paths, we need explicit Begin operations for gradients to
  // control what type of gradient is created (we may need to create a
  // native gradient object of the right type immediately).
  virtual void BeginLinearGradient(Point aFrom, Point aTo) = 0;
  virtual void BeginRadialGradient(Point aInnerCenter, double aInnerRadius, Point aOuterCenter, double aOuterRadius) = 0;
  virtual void AddGradientStop(double aOffset, Color aColor) = 0;
  // Stores the current gradient into aDest where it can be used.
  // The current gradient is cleared. Guarantees that the backend type of aDest
  // is set to the backend type of this DrawTarget.
  virtual void FinishGradient(Pattern* aDest) = 0;

  // Since a path is a backend-specific object for the backend we want to
  // eventually draw the path to, GlyphsToPath needs to be a method on the
  // DrawTarget. 
  virtual void GlyphsToPath(const ScaledFont& aFont, const GlyphBuffer& aBuffer,
                            uint32_t aCharStart, uint32_t aCharLength, Path* aPath) = 0;

  // Creates an intermediate buffer to draw into.
  // Guarantees that the DrawTarget returned by the DrawBuffer's Target() will
  // use the same backend type as this DrawTarget, so Paths and other objects
  // created from one DrawTarget can be used with the other.
  virtual already_AddRefed<DrawBuffer> CreateDrawBuffer(IntSize aSize, ContentType aContent) = 0;
  // Convert aBuffer to a form optimized for this DrawTarget's backend.
  virtual already_AddRefed<SourceBuffer> Optimize(SourceBuffer* aBuffer) = 0;

  // Basic drawing primitives.
  virtual void Fill(const Path& aPath, const FillStyle& aStyle, const Pattern& aPattern,
                    Options aOptions) = 0;
  virtual void Stroke(const Path& aPath, const StrokeStyle& aStyle, const Pattern& aPattern, 
                      Options aOptions) = 0;
  virtual void FillMask(const Pattern& aMask, const FillStyle& aStyle, const Pattern& aPattern,
                        Options aOptions) = 0;
  virtual void FillGlyphs(Point aBaseline, const ScaledFont& aFont,
                          const GlyphBuffer& aBuffer, uint32_t aStart, uint32_t aLength,
                          const FillStyle& aStyle, const Pattern& aPattern,
                          Options aOptions) = 0;

  // Shortcuts for canvas (and other users).
  // ClearRect is not needed since it's just FillRect with operator CLEAR,
  // which is trivial to detect (and we don't need to
  // optimize for large numbers of ClearRects).
  virtual void FillRect(const Rect& aRect, const FillStyle& aStyle, const Pattern& aPattern,
                        Options aOptions) = 0;
  virtual void StrokeRect(const Rect& aRect, const StrokeStyle& aStyle, const Pattern& aPattern,
                          Options aOptions) = 0;
  virtual void DrawImage(SourceBuffer* aBuffer, Point aDest,
                         const DrawImageOptions& aOptions) = 0;
  virtual void DrawImage(SourceBuffer* aBuffer, const Rect& aDest,
                         const DrawImageOptions& aOptions) = 0;
  virtual void DrawImage(SourceBuffer* aBuffer, const Rect& aSource, const Rect& aDest,
                         const DrawImageOptions& aOptions) = 0;

protected:
  Matrix mMatrix;
  Clip mClip;
  BackendType mBackend;
};
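
Putting the pieces together, a sketch of typical use (the Point/Size/Color/Rect constructor arguments and the Matrix::Translation helper are assumptions):

// Illustrative end-to-end usage of a DrawTarget.
void DrawSomething(DrawTarget* aTarget, SourceBuffer* aImage)
{
  // Transform and clip are plain state: just set them, no save/restore.
  aTarget->SetCurrentMatrix(Matrix::Translation(100, 0));   // hypothetical helper

  // Build a path through the DrawTarget's path-construction API.
  aTarget->PathRectangle(Point(10, 10), Size(100, 60));
  aTarget->PathArc(Point(60, 120), 30, 0, 6.2832, CLOCKWISE);
  Path path;                   // empty base-class value...
  aTarget->FinishPath(&path);  // ...overwritten with a backend-specific Path

  // Fill the path with a solid color.
  FillStyle fill;
  fill.mBlur = 0.0f;
  fill.mAlpha = 1.0f;
  aTarget->Fill(path, fill, ColorPattern(Color(0, 0.5, 1, 1)), Options());

  // Draw an image using the canvas-style shortcut.
  aTarget->DrawImage(aImage, Point(200, 10), DrawImageOptions());
}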

DrawBuffers

We need a way to create a temporary buffer we can draw into and then use as a source. Typically we finish drawing into it, then start using it as a source and never draw into it again, so we should optimize for that.

/**
 * Abstraction of an intermediate buffer: a buffer you can render into
 * and use the results as a source image. Refcounted and heap-allocated.
 */
class DrawBuffer {
public:
  BackendType Backend() { return mBackend; }
  DrawTarget& Target() { return *mTarget; }
  // Returns a snapshot of the current state. A good implementation will
  // avoid copies by returning a wrapper around the current target buffer
  // and arrange that if there is a future drawing call on mTarget, and
  // the SourceBuffer wrapper still exists, we duplicate the target buffer
  // and start drawing into that.
  virtual already_AddRefed<SourceBuffer> Snapshot() = 0; 
protected:
  BackendType mBackend;
  RefPtr<DrawTarget> mTarget;
};
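
A sketch of the intended draw-then-snapshot flow (aTarget and the Color/Rect/IntSize arguments are assumptions):

// Render into a temporary buffer, then composite the result into aTarget.
RefPtr<DrawBuffer> temp =
    aTarget->CreateDrawBuffer(IntSize(256, 256), TRANSPARENT);

FillStyle fill;
fill.mBlur = 0.0f;
fill.mAlpha = 1.0f;
temp->Target().FillRect(Rect(0, 0, 256, 256), fill,
                        ColorPattern(Color(1, 0, 0, 0.5)), Options());

// Take a snapshot; a good backend returns a zero-copy wrapper as long as
// nothing draws into 'temp' again while the snapshot is alive.
RefPtr<SourceBuffer> snapshot = temp->Snapshot();
aTarget->DrawImage(snapshot, Point(20, 20), DrawImageOptions());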

Additional APIs

We need a bunch of platform-specific APIs to create SourceBuffers, DrawBuffers and DrawTargets bound to platform objects.