
That isn’t true. Again, please look more closely at the first example in the article, and take the time to understand it. It demonstrates there’s a better method than what you’re suggesting, proving that clipping to pixels and summing the area does not give the best visual quality you can get.


As pointed out by Raphlinus, the moire pattern in the Siemens star isn't such a significant quality indicator for the type of content usually encountered in 2D vector graphics. With the analytical coverage calculation you can have perfect font/text rendering, perfect thin lines/shapes and, by solving all the areas at once, no conflating artifacts.
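
To make "analytical coverage" concrete, here is a rough sketch of the simplest case, a single straight edge crossing a pixel: clip the pixel square against the half-plane and take the exact area of what is left. This is just my own Python illustration of the idea (names made up), not code from any particular renderer.

    # Exact box-filter coverage of the unit pixel [0,1]x[0,1] by the
    # half-plane nx*x + ny*y <= d (illustrative sketch only).
    def half_plane_coverage(nx, ny, d):
        square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

        def f(p):
            return nx * p[0] + ny * p[1] - d   # negative or zero means "covered"

        def crossing(p, q):
            t = f(p) / (f(p) - f(q))           # where segment p-q meets the edge line
            return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

        # Sutherland-Hodgman clip of the square against the half-plane.
        clipped = []
        for i, p in enumerate(square):
            q = square[(i + 1) % 4]
            if f(p) <= 0:
                clipped.append(p)
                if f(q) > 0:
                    clipped.append(crossing(p, q))
            elif f(q) <= 0:
                clipped.append(crossing(p, q))

        # Shoelace formula: the exact covered area is the coverage value.
        area = 0.0
        for i, (x0, y0) in enumerate(clipped):
            x1, y1 = clipped[(i + 1) % len(clipped)]
            area += x0 * y1 - x1 * y0
        return abs(area) / 2.0

    print(half_plane_coverage(1.0, 0.0, 0.3))  # edge at x = 0.3 -> exactly 0.3

Doing this for every edge of every shape touching a pixel is the expensive part, but the coverage is exact rather than estimated from point samples.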


Raph made an argument that Box is good enough for lots of things, which is subjective and depends entirely on what things you’re doing, and how much you actually care about quality.

You are claiming it’s the best possible. Box filter is simply not the best possible, and this fact is well understood and documented.

You can relax your claim to say it’s good enough for what you need, and I won’t disagree with you anymore. Personally, I’m sensitive to visible pixelation, and the Box Filter will always result in some visible pixelation with all 2D vector graphics, so if you really care about high quality rendering, I’m very skeptical that you really want Box filtering as the ideal target. Box filter is a compromise, it’s easier & faster to compute. But it’s not the highest quality. It would be good to understand why that’s the case.

* Edit to further clarify and respond to this:

> With the analytical coverage calculation you can have perfect font/text rendering, perfect thin lines/shapes and, by solving all the areas at once, no conflating artifacts.

You cannot get perfect font or text rendering with a Box filter, and you will get some conflating artifacts. They might be very slight, and not bothersome to most people, but they do exist with a Box filter, always. This is a mathematical property of Box filtering, not a subjective claim.


Why do conflation artifacts always exist with a box filter? AFAIK conflation artifacts are a product of the compositing process, not the filtering process.

If you have two non-overlapping shapes of the same color covering the plane, and you use a box filter on the first shape to sample a pixel on the boundary, then use the same box filter on the second shape, and then composite them with alpha blending, you get a conflation artifact along the boundary where the background bleeds through.

But if you use the fact that the shapes are non-overlapping and sum their contributions instead, the artifact disappears, while still using the same box filter.
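
In rough numbers (a made-up sketch, not anyone's actual renderer), with shape color C over background B and a pixel split 50/50 along the shared edge:

    C, B = 1.0, 0.0      # shape color and background, as scalars
    a = 0.5              # box-filter coverage of the first shape; the second covers 1 - a

    # Composite each shape separately with alpha blending:
    after_shape1 = a * C + (1 - a) * B
    after_shape2 = (1 - a) * C + a * after_shape1
    print(after_shape2)              # 0.75 -- the background bleeds through the seam

    # Sum the coverages instead, using the fact that the shapes do not overlap:
    print((a + (1 - a)) * C)         # 1.0 -- no seam, same box filter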


It’s because sampling artifacts never disappear with Box. The high frequency aliasing is introduced by the filtering itself: the Box has an infinite frequency response, so the artifacts cannot be eliminated. This is why all other, better filters fade their weight smoothly to zero at the support boundary.

You can see this with a single sharp edge, it doesn’t need to involve multiple polygons, nor even vector rendering, it happens when downsampling images too.
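
One way to see it in numbers (illustrative only; the filter widths are made up): compare the frequency response of a one-pixel box against a smooth filter at frequencies above the Nyquist limit of half a cycle per pixel. Whatever survives up there folds back down as aliasing once you take one sample per pixel.

    import numpy as np

    f = np.array([0.75, 1.25, 1.75])            # cycles/pixel, all above Nyquist (0.5)

    box = np.abs(np.sinc(f))                    # FT of a unit-width box: |sin(pi*f) / (pi*f)|
    gauss = np.exp(-2 * (np.pi * 0.5 * f)**2)   # FT of a Gaussian with sigma = 0.5 px

    for fi, b, g in zip(f, box, gauss):
        print(f"{fi:.2f} cyc/px   box {b:.3f}   gaussian {g:.6f}")
    # The box response only decays like 1/f, so high frequencies keep leaking
    # through and aliasing; the smooth filter is already effectively zero there.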


These are sampling artifacts, but I believe yorbwa is correct in distinguishing these from conflation artifacts, as defined in Kilgard & Bolz. I think of the latter as compositing not commuting exactly with antialiasing (sampling). You only get conflation artifacts when compositing multiple shapes (or rendering a single shape using analytical area when the winding number is not everywhere 0 or 1), while you definitely see aliasing when rendering a single shape, say a Siemens star.


Okay, that’s fair. I’m misusing the term ‘conflation’ in that sense. I was trying to make the point that compositing two wrong answers yields a wrong answer, but I stand corrected that it’s not the compositing that’s introducing error, it is the sampling + box-filtering.


I don't see how you can support the claim of perfect thin line rendering; it's visibly just not very good. So box filtering logically can't be the best possible quality.

Can we make a magical adaptive filter which resembles a box filter for half-planes, a tent filter for thin lines, Mitchell-Netravali or oblique projection for natural images, and a Gaussian when filtering images for which high frequency detail is not important? Perhaps, but that feels like advanced research, and also computationally expensive. I don't think you can claim "perfect" without backing it up with human factors data really demonstrating that the filtered images are optimal with respect to perceived quality.
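
For reference, here are 1D cross-sections of the simpler kernels mentioned above (my own sketch; the Mitchell-Netravali coefficients use the common B = C = 1/3 choice):

    import numpy as np

    def box(x):
        return (np.abs(x) <= 0.5).astype(float)

    def tent(x):
        return np.clip(1.0 - np.abs(x), 0.0, None)

    def mitchell(x, B=1/3, C=1/3):               # Mitchell-Netravali cubic
        ax = np.abs(x)
        near = ((12 - 9*B - 6*C) * ax**3 + (-18 + 12*B + 6*C) * ax**2 + (6 - 2*B)) / 6
        far = ((-B - 6*C) * ax**3 + (6*B + 30*C) * ax**2
               + (-12*B - 48*C) * ax + (8*B + 24*C)) / 6
        return np.where(ax < 1, near, np.where(ax < 2, far, 0.0))

    def gaussian(x, sigma=0.5):
        return np.exp(-x**2 / (2 * sigma**2))

    print(mitchell(np.array([0.0, 0.5, 1.0, 1.5])))
    # approx [0.889, 0.535, 0.056, -0.035] -- tapers smoothly to zero at 2 pixels,
    # with a small negative lobe (slight sharpening), unlike the hard-edged box.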


I wonder if the ideas behind GIMP's new nonlinear resampling filters (NoHalo and LoHalo, and eventually more https://graphicdesign.stackexchange.com/q/138059) may translate to vector rasterization in some form (though here we're translating continuous to discrete, not discrete to continuous to a differently spaced discrete).


Backing up to your earlier comment. Pixels on some displays are in fact little squares of uniform color. The question then is how to color a pixel given geometry with detail within that square.

All of this "filtering" amounts to variations on adding blur. In fact the article extends the technique to deliberately blur images on a larger scale. When we integrate a function (which could be a color gradient over a fully filled polygon) and then paint the little square with a solid "average" color, that too is a form of blurring (more like distorting, in this case) the detail.

It is notable that the examples given are moving, which means moire patterns and other artifacts will have frame-to-frame effects that may be visually annoying. Simply blurring the image takes care of that, at the expense of eliminating what looks like detail but may not actually be meaningful. For example, some of the less blurry images seem to have radial lines that bend and re-emerge in another location, so I'd call that false detail. It may actually be better to blur such detail than to leave it looking sharper with false contours.


Yes it’s a good point that LCD pixels are more square than the CRTs that were ubiquitous when Alvy Ray wrote his paper. I think I even made that point before on HN somewhere. I did mention in response to Raph that yes the ideal target depends on what the display is, and the filter choice does depend on whether it’s LCD, CRT, film, print, or something else. That said, LCD pixels are not perfect little squares, and they’re almost never uniform color. The ideal filter for LCDs might be kinda complicated, and you’d probably need three RGB-separated filters.

Conceptually, what we’re doing is low-pass filtering, rather than blurring, so I wouldn’t necessarily call filtering just “adding blur”, but in some sense those two ideas are very close to each other, so I wouldn’t call it wrong either. :P The render filtering is a convolution integral, and is slightly different than adding blur to an image without taking the pixel shape into account. Here the filter’s quality depends on taking the pixel shape into account.
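
Roughly, in code terms (a hand-wavy sketch, not anything from the article): the pixel value comes from convolving the continuous scene with the filter kernel and then sampling, rather than from blurring samples that were already taken.

    import numpy as np

    def scene(x):
        # A hard vertical edge at x = 0.31, in pixel units.
        return (x < 0.31).astype(float)

    def render_pixel(center, kernel, radius, n=4096):
        # Numerically approximate  integral of scene(x) * kernel(x - center) dx.
        x = np.linspace(center - radius, center + radius, n)
        w = kernel(x - center)
        return np.sum(scene(x) * w) / np.sum(w)

    box = lambda t: (np.abs(t) <= 0.5).astype(float)       # 1-px box
    tent = lambda t: np.clip(1.0 - np.abs(t), 0.0, None)   # 2-px tent

    print(render_pixel(0.0, box, 0.5), render_pixel(0.0, tent, 1.0))
    # ~0.81 and ~0.76: both respond to the sub-pixel edge position. Point-sampling
    # first (scene(0.0) == 1.0) and blurring those samples afterwards cannot.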

You’re right about making note of the animated examples - this is because it’s easier to demonstrate aliasing when animated. The ‘false detail’ is also aliasing, and does arise because the filtering didn’t adequately filter out high frequencies, so they’ve been sampled incorrectly and lead to incorrect image reconstruction. I totally agree that if you get such aliasing false detail, it’s preferable to err (slightly) on the side of blurry, rather than sharp and wrong.


I don’t know of any display technology in which pixels are little squares, if you really get out the magnifying glass.


Would DLP projectors, which distribute color over time (a color wheel) or via multiple light sources combined with dichroic filters, produce uniform squares of color?


In theory, if the DMD mirrors were perfect little squares, and if the lens had perfect focus, and if the mirrors switched infinitely fast and were perfectly aligned with the color wheel in time, then maybe it’d be fair to call them uniform squares of color. In reality, the mirrors look square, but aren’t perfect squares - there’s variance in the flatness, aim, edges & beveling, and both the lens and the mirror switching blur the pixels. The mirror switching over time is not infinitely fast, so the colors change during their cycle (usually multiple times per color of the wheel!). Not to mention some newer DLPs are using LEDs that are less square than DMD mirrors to begin with.

All this comes down to the projected pixels not being nearly as square as one might think (maybe that’s on purpose), though do note that squares are not the ideal shape of a pixel in the first place, for the same reason box filtering isn’t the best filter. If your pixel has sharp edges, that causes artifacts.

Take a look at the pixel-close-up comparisons in the projection shoot-out: https://www.projectorcentral.com/Projector-Resolution-Shooto...

Notice how all of them are visibly blurrier than the source image, and that all of them even have visible aliasing.

Also just for fun, check out this interesting video showing what DMD mirrors look like under a microscope: https://youtu.be/KpatWNi0__o



