Subpixel AA literally gets the colors wrong at the boundary, so calling grayscale AA a simplification is a bit too much IMO. Some people prefer subpixel AA and it might even be a matter of accessibility for others, so its existence is a good thing I guess, but it is not grayscale that makes any false assumptions.
Pixel boundaries on your display are imaginary, all you have are RGBRGB light sources next to each other. The pixel boundaries are determined by your left-most and right-most light sources on the display, so you have |RGB|RGB|..., but it really shouldn't affect how you render text in the middle of your screen.
edit:
> it is not grayscale that makes any false assumptions
Grayscale assumes that those pixel boundaries are real and that they are relevant.
How do pixel boundaries relate to the fact that you cannot avoid color fringing with subpixel AA? You could shift your whole framebuffer (or maybe just a glyph, with possible caveats) by a subpixel or two and have GBR or BRG pixels; it doesn't matter. What matters is that you need to keep the hue constant across triplets of subpixels to avoid fringing, and this is what grayscale does, and what subpixel does not do because breaking it is its whole point. Subpixel basically assumes all your subpixels are the same color.
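To make the hue point concrete, here's a toy sketch (function names are mine, not from any real rasterizer): grayscale AA derives one coverage value per pixel and writes it to all three channels, so the hue never changes, while subpixel AA writes per-channel coverage, so the channels can diverge at an edge.

```python
# Illustrative sketch of how grayscale vs. subpixel AA fill one output
# pixel from coverage values. Not any real rasterizer's code.

def grayscale_pixel(coverage):
    """One coverage value per whole pixel -> equal R, G, B (hue constant)."""
    c = round(255 * coverage)
    return (c, c, c)

def subpixel_pixel(cov_r, cov_g, cov_b):
    """One coverage value per subpixel -> channels can diverge (fringe)."""
    return (round(255 * cov_r), round(255 * cov_g), round(255 * cov_b))

# A glyph edge covering only the left third of the pixel:
print(grayscale_pixel(1 / 3))       # (85, 85, 85) -- gray, hue constant
print(subpixel_pixel(1.0, 0, 0))    # (255, 0, 0) -- pure red fringe
```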
Is there any subpixel text rendering implementation that avoids color fringing to such a degree I cannot see it, or not? It's certainly there on both Windows and Linux.
I don't know if such rendering is commonplace, but I know for sure that imageworsener handles this correctly for downscaling images. Check out the option `-offsetrb 1/3`.
I don't know what you can and can't see, but I could not tell the difference between grayscaling a screenshot and not, on my Mac back when they still used colored subpixel AA (unless I zoomed in on the screenshot).
This assumes you actually have RGB light sources in that order. There are (were?) consumer displays with RGBW or other alternative arrangements out there, not to mention panels with a different color ordering. And if you rotate or invert the display, the ordering becomes different.
An ordinary consumer will never understand what's going on here, so if your code assumes RGB subpixel rendering for text is always correct, it's just going to look ugly to them for no reason.
I was assuming that to make a point. The actual subpixel order can either be retrieved by an appropriate API (if available) or be provided by the user (display settings). Point being, subpixels don't usually overlap on the same area, which is grayscale's assumption. I agree that if subpixel order is not known, then grayscale is a sane default.
If your stroke is 7 subpixels wide, it will be, for instance, RGBRGBR. That has three Rs, two Gs, and two Bs. That is 50% more red light than blue or green. That is tinted red, no matter how you look at it.
Subpixel antialiasing does distort colours. That is just a physical fact.
The only reason it works at all is that on average, it will even out to a uniform white. But in the details, it will not. Especially around straight vertical edges.
That's a very naive and wrong way to do subpixel-AA. Roughly, to calculate the light intensity of a red subpixel, you must also take into account the red light intensity over a whole pixel-wide area centered on that subpixel. There are better filters to use here; this would be a box filter, 1 pixel wide.
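A minimal sketch of that idea, assuming coverage is already sampled at subpixel resolution (3 samples per pixel). This illustrates the 1-pixel-wide box filter described above; it is not any particular rasterizer's code.

```python
# Each output subpixel averages coverage over a full-pixel (3-subpixel)
# window centered on it, instead of using its own sample alone.

def box_filter_subpixels(coverage):
    """coverage: per-subpixel coverage in [0, 1], length = 3 * pixel count.
    Returns box-filtered per-subpixel intensities."""
    n = len(coverage)
    out = []
    for i in range(n):
        # One-pixel-wide window centered on subpixel i (clipped at edges).
        window = [coverage[j] for j in range(i - 1, i + 2) if 0 <= j < n]
        out.append(sum(window) / 3)  # always divide by the full window width
    return out

# A hard vertical edge: left pixel fully covered, right pixel empty.
print([round(v, 2) for v in box_filter_subpixels([1, 1, 1, 0, 0, 0])])
# -> [0.67, 1.0, 0.67, 0.33, 0.0, 0.0]  (the edge is spread, not a hard cut)
```

Note how the filtered edge bleeds one subpixel in each direction, which is exactly what keeps any single channel from being over-represented at the boundary.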
No, it is not. It is the physically correct way to do it. 50% more red light is emitted than blue or green. That is a red tint. Your eyes aren't going to see that as anything other than a red-tinted line. That is what you will see. Sub-pixel antialiased text looks like it has colourful fringes. People see this. It is not in their imagination. It actually is there, and it is exactly because of sub-pixel antialiasing.
>No, it is not. It is the physically correct way to do it. 50% more red light is emitted than blue or green. That is a red tint.
That is a red tint. And that's why it's not the correct way to do it. A subpixel antialiased white line that is seven subpixels wide, on a black background, should not produce (8-bit RGB triplets): 0-0-0, 255-255-255, 255-255-255, 255-0-0.
I’m not agreeing or disagreeing with any of you as I don’t have enough knowledge on the topic, but what you say would effectively mean that there is no other possible color besides RGB, which is technically true, but is a nigh useless distinction as color blending is a real phenomenon that happens to human vision. In this way, subpixel hinting, while a hack with quite a few tradeoffs, just hijacks color blending used for producing color normally on your screen for sharper fonts.
> but what you say would effectively mean that there is no other possible color besides RGB
It would not at all mean that? I don't know why you would think it means that.
All I am saying is that if you draw a vertical line 7 subpixels wide, the light it emits will not add up to white. This is a simple physical fact.
To make it easier to imagine, think of a one subpixel wide vertical line. It will be either pure red, pure green or pure blue depending on position. In no way will that ever look white.
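The arithmetic behind the 7-subpixel example is easy to check (illustrative snippet, assuming an RGB stripe with the stroke starting on a red subpixel):

```python
# A naively lit run of 7 adjacent subpixels starting on R covers RGBRGBR,
# so the emitted light is not channel-balanced, i.e. not white.

pattern = "RGB" * 3   # subpixel order along a row
lit = pattern[:7]     # a stroke 7 subpixels wide, starting on red
print(lit, {c: lit.count(c) for c in "RGB"})
# -> RGBRGBR {'R': 3, 'G': 2, 'B': 2}  (50% more red than green or blue)
```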
> Pixel boundaries on your display are imaginary, all you have are RGBRGB light sources next to each other.
Kinda. From my testing (in my subjective perception) several years back, grayscale text looks slightly colored at the edges, but subpixel-rendered text looks more colored in the opposite direction. And in my testing, vertical lines look more fringey if they take up GBR or BRG subpixels than RGB. I suspect these are because the green subpixel is perceived as the brightest by humans, and looks better in the middle of a pixel, and RGB and BGR put it in the center so it works out.
1. Even if the rendering gets the subpixel order right, the subpixels are rarely equidistant on a display. They tend to be closer to each other within a logical pixel.
2. Some subpixel-AA algorithms are naive, and trade color accuracy for luminance accuracy, causing more color fringes.
3. Incorrect gamma handling.
4. If aiming for more luminance accuracy, incorrectly taking into account the relative luminance of the subpixels (what you observed with the brighter green subpixels).
Imageworsener[1] has an option to downscale with subpixel antialiasing, it's gamma correct by default, and it has an option for adjusting the subpixel offsets within a pixel (instead of the naive default of 1/3). It might be a good way to try out correct subpixel-AA, but it only works on images. I guess you can render text at high resolution and downscale that with imageworsener, but you won't get any hinting that way, though maybe that's desired.
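For intuition, here is where a 1/3 offset like imageworsener's `-offsetrb` places each channel's sample when downscaling by 3x. This is my own illustration of the geometry, not imageworsener's code: with an RGB stripe, the red subpixel's center sits 1/3 of a pixel left of the pixel center and blue 1/3 right, so those channels should be resampled at shifted positions.

```python
# Sample positions (in source coordinates) for the first two output
# pixels of a 3x downscale, with R and B offset by 1/3 of an output pixel.

factor = 3  # downscale ratio: 3 source samples per output pixel
for x in range(2):
    center = (x + 0.5) * factor           # pixel center in source coords
    print(x, {"R": center - factor / 3,   # red sampled 1/3 pixel early
              "G": center,
              "B": center + factor / 3})  # blue sampled 1/3 pixel late
# -> 0 {'R': 0.5, 'G': 1.5, 'B': 2.5}
# -> 1 {'R': 3.5, 'G': 4.5, 'B': 5.5}
```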
Very little software does AA correctly, let alone subpixel-AA.
I'd actually prefer a Bayer panel over the traditional stripes, if it's sufficiently dumb in "native" mode and there exists a post-process shader for the GPU to monolithically handle reverse-debayering, overdrive (LCD response speeds are non-linear), dithering, and color-mapping. Ideally, the latter would even use a head tracker to compensate for viewing-angle-dependent gamma curves (the main pitfall of MVA panels for semi-static content).
Bayer can't display pixel art at 1:1, just like with bitmap fonts.
If you want crisp color, you need to be aware of the Bayer tiling, and ideally you'd want to do the inverse-debayering on the source-side of the display cable, because that way you can save 4x bandwidth (you can treat a Bayer panel like a High-DPI one while rendering, but Bayer-pixels need to match RGB source pixels for crisp results).
Do keep in mind that many TV panels are Bayer panels, as they tend to consume yuv420 or yuv422, if fed externally, which greatly reduces the bandwidth penalty of doing the 24bit to 8bit per-pixel color depth conversion in the display's panel-driver-GPU.
For text rendering there are some alternative subpixel layouts like PenTile in use for smartphones, but patents and software/hardware integration (software in this case including FreeType or whatever replacement is used) inhibit usage of these for general purpose desktop monitors. One large benefit of these tends to be their vertical/horizontal resolution being matched, allowing both portrait and landscape mode to be crisp. Also, fewer physical pixels take less power to drive.
I guess that my question is also why source pixels would be "RGB", when "Bayer" is closer to the human visual system.
Because it's easier for people who don't know much about the human visual system to work with a (more) flawed model?
However, while abstractions have a benefit, it's kind of weird that even the "closest to the metal" model seems to often take "Bayer/420/422" as more of an approximation, rather than "RGB" being one?
Do you see how weird it is that the above-mentioned TV panels are NOT (?) Bayer panels physically?