Hacker News

virtualritz, last Wednesday at 9:53 AM

Unless I'm missing something, I think this describes box filtering.

It should probably mention that this is only sufficient for some use cases, but not for high-quality ones.

E.g. if you were to use this for rendering font glyphs into something like a static image (or slow rolling titles/credits), you'd probably want a higher-quality filter.
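To make the distinction concrete, here's a minimal 1D sketch (my own illustration, not from the article): box filtering weights all samples inside a pixel equally over a 1-pixel support, while a higher-quality filter like a tent weights samples over a 2-pixel support, so an edge bleeds slightly into neighboring pixels instead of cutting off hard.

```python
# Antialias a vertical edge at x = `edge` by supersampling.
# box_coverage: classic coverage / box filtering (1px support).
# tent_coverage: tent filter with 2px support, the simplest example
# of the "higher quality" filters mentioned above.

def box_coverage(pixel, edge, samples=64):
    # Fraction of uniformly spaced sample points inside the shape.
    inside = sum(1 for i in range(samples)
                 if pixel + (i + 0.5) / samples < edge)
    return inside / samples

def tent_coverage(pixel, edge, samples=128):
    # Weight samples by a tent centered on the pixel, support = 2 pixels.
    center = pixel + 0.5
    total = weighted = 0.0
    for i in range(samples):
        x = center - 1.0 + 2.0 * (i + 0.5) / samples
        w = 1.0 - abs(x - center)  # tent weight in [0, 1]
        total += w
        if x < edge:
            weighted += w
    return weighted / total

# An edge at x = 1.25: under the box filter pixel 0 is fully covered
# and pixel 2 untouched; the tent spreads the transition wider.
print([round(box_coverage(p, 1.25), 3) for p in range(3)])
print([round(tent_coverage(p, 1.25), 3) for p in range(3)])
```

The wider support is what reduces the stair-stepping and shimmer a pure box filter leaves on slowly moving text.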


Replies

jstimpfle, last Wednesday at 11:07 AM

What type of filter do you mean? Unless I'm misunderstanding or missing something, the approach described doesn't go into the details of how coverage is computed. If the input is only simple lines whose coverage can be computed exactly (I don't know how to do this for curves), then what's missing?
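For straight edges, exact coverage can be computed analytically; one standard way (my own sketch, not the method from the article) is to clip the polygon to the pixel square with Sutherland–Hodgman and take the area of the clipped result:

```python
# Exact per-pixel coverage for a polygon with straight edges:
# clip the polygon to the pixel square, then measure the area.

def clip(poly, inside, intersect):
    # One Sutherland-Hodgman pass against a single clip boundary.
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))
    return out

def area(poly):
    # Shoelace formula.
    return abs(sum(poly[i - 1][0] * p[1] - p[0] * poly[i - 1][1]
                   for i, p in enumerate(poly))) / 2.0

def pixel_coverage(poly, px, py):
    # Clip against the four edges of [px, px+1] x [py, py+1].
    def lerp(a, b, t):
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    for axis, bound, keep_ge in ((0, px, True), (0, px + 1, False),
                                 (1, py, True), (1, py + 1, False)):
        inside = (lambda p, a=axis, b=bound, g=keep_ge:
                  p[a] >= b if g else p[a] <= b)
        isect = (lambda p, q, a=axis, b=bound:
                 lerp(p, q, (b - p[a]) / (q[a] - p[a])))
        poly = clip(poly, inside, isect)
        if not poly:
            return 0.0
    return area(poly)

# A triangle covering exactly half of pixel (0, 0):
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(pixel_coverage(tri, 0, 0))  # → 0.5
```

Curves are typically flattened to line segments first, at which point the same machinery applies.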

I'd be interested in how feasible complete 2D UIs using dynamically GPU-rendered vector graphics are. I've played with vector rendering in the past, using a pixel shader that more or less implemented the method described in the OP. It could render the Ghostscript tiger at good speeds (single-digit milliseconds at 4K, IIRC), but there is always an overhead to generating vector paths, sampling them into line segments, dispatching them, etc. Building a 2D UI from optimized primitives instead, like axis-aligned rects and rounded rects, will almost always be faster, obviously.
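The "sampling them into line segments" step can be sketched like this (an assumed uniform-step flattening for brevity; real renderers usually subdivide adaptively to an error tolerance):

```python
# Flatten a quadratic Bezier curve into line segments before
# rasterization - part of the per-path preparation overhead
# mentioned above.

def flatten_quadratic(p0, p1, p2, segments=8):
    pts = []
    for i in range(segments + 1):
        t = i / segments
        mt = 1.0 - t
        # Standard quadratic Bezier evaluation.
        x = mt * mt * p0[0] + 2 * mt * t * p1[0] + t * t * p2[0]
        y = mt * mt * p0[1] + 2 * mt * t * p1[1] + t * t * p2[1]
        pts.append((x, y))
    return list(zip(pts, pts[1:]))  # consecutive line segments

segs = flatten_quadratic((0.0, 0.0), (1.0, 2.0), (2.0, 0.0))
print(len(segs))  # → 8
```

This work scales with path complexity, which is why fixed primitives like rounded rects (closed-form coverage, no flattening) tend to win for UI workloads.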

Text rendering typically adds pixel snapping, possibly using a bytecode interpreter (as in TrueType hinting), and often sub-pixel rendering.
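The pixel snapping part is simple to illustrate (a hedged sketch with names of my own choosing, not any particular rasterizer's API): glyph origins get rounded to whole pixels so stems land on pixel boundaries, or to thirds of a pixel when targeting RGB sub-pixel positions.

```python
# Snap a horizontal glyph-origin coordinate to the pixel grid.
# subpixels=1 -> whole-pixel snapping; subpixels=3 -> the three
# RGB sub-pixel positions used by sub-pixel text rendering.

def snap(x, subpixels=1):
    return round(x * subpixels) / subpixels

print(snap(10.37))     # → 10.0
print(snap(10.37, 3))  # nearest third of a pixel
```

Hinting via the bytecode interpreter is far more involved, since the font's own programs move outline points around, but the goal is the same: align features with the pixel grid.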
