I got the itch to try and create an Atkinson dithering paint program.
What do I mean by that? Imagine a grayscale paint program (like, say, Procreate, just all grays) but all the pixels go through an Atkinson dither before hitting the screen.
To the artist, you're darkening or lightening the dither, so to speak, in areas of the canvas with your brush/eraser (crowding or thinning the resulting B&W pixels).
An hour spent with Claude making this happen in HTML5 led me to set the experiment aside. The result was okay, but it only applied the dither after the mouse was released, and I wasn't driven enough to get it to dither in real time (as the brush is being stroked).
The mouse is a terrible painting tool too; with a touch interface on an iPad (again, like Procreate) it might be worth pursuing further. It would need to be very performant, as I say, so that you could see the dither as the brush is moving. (This might require a special bitmap and code where you store away the diffusion error, so that you can update only the portion of the screen where the brush has moved rather than re-dithering the entire document at 60 fps.)
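For reference, the core algorithm is small. Here's a minimal sketch of a plain full-image Atkinson pass in TypeScript, assuming a flat grayscale buffer (names are my own); Atkinson's quirk is that it diffuses only 6/8 of the quantization error, which is what gives it that punchy high-contrast look:

    function atkinsonDither(gray: Float32Array, w: number, h: number): Uint8Array {
      const out = new Uint8Array(w * h);
      const buf = Float32Array.from(gray);  // working copy that absorbs diffused error
      // The six Atkinson taps; each receives 1/8 of the error (the other 2/8 is dropped).
      const taps: [number, number][] = [[1, 0], [2, 0], [-1, 1], [0, 1], [1, 1], [0, 2]];
      for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
          const i = y * w + x;
          const v = buf[i];
          const bw = v < 128 ? 0 : 255;     // threshold to black or white
          out[i] = bw;
          const err = (v - bw) / 8;
          for (const [dx, dy] of taps) {
            const nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < w && ny < h) buf[ny * w + nx] += err;
          }
        }
      }
      return out;
    }

Re-running that over the whole canvas on every mousemove is the naive approach that gets expensive; the stored-error trick would be about skipping most of it.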
I distinctly remember that "Paintbrush" in Windows 3.1 had something very similar to this. Check it out in the Win3.1 emulator: https://archive.org/details/win3_stock -- open Paintbrush, go to Options -> Image Attributes and set "Colors" to "Black and White".
However, the dithering there is not fixed to the background but depends on your brushstroke / mouse position.
This is correct. Atkinson dithering looks cool (at least to my eyes) but is not the best choice for real-time work. In particular, it cannot be implemented in a shader, although you can approximate it. But computers are fast enough that a CPU-bound algorithm can still work at interactive speeds. Not sure how well it would work in an editor though: in theory changing a single pixel at the top left of the image could cause a chain reaction that could force the whole image to be re-dithered.
I did an implementation of Atkinson dithering for a web component in case anyone is feeling the itch to dither like it is 1985.
> in theory changing a single pixel at the top left of the image could cause a chain reaction that could force the whole image to be re-dithered.
My thought is to store the error for each pixel in a separate channel. When a portion of the bitmap is "dirtied", you could start at the top left of the dirty rectangle and re-Atkinson until, once outside the dirty rect, you compute the same error as the existing error for a given pixel. From that point on the same dither pattern would follow, and you can stop.
As you say, it's conceivable you would have to go to the very end of the document. If the error is an integer, though, I feel like you would hit a point where you can stop early. Maybe I am misunderstanding how error diffusion works, or grossly misjudging how wildly mismatched the before/after errors would be.
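To make that concrete, here's a rough sketch of one way the bookkeeping could work; all the names and the exact layout are my own guesses, not from any existing implementation. Since Atkinson never pushes error upward, the canvas can keep two extra channels recording the error each pixel received from one row above (carry1) and two rows above (carry2). A re-dither then starts at the dirty rect's top row and stops once two consecutive rows below the dirty rect push down exactly the error they pushed last time:

    function reditherFrom(
      dirtyTop: number, dirtyBottom: number,
      gray: Float32Array, out: Uint8Array,
      carry1: Float32Array, carry2: Float32Array,  // error received from 1 and 2 rows above
      w: number, h: number,
    ): void {
      const acc = new Float32Array(w * h);  // incoming error, rebuilt during this pass
      // Rows above the dirty rect can't have changed (error never flows upward),
      // so seed the first two rows from the stored channels.
      for (let x = 0; x < w; x++) {
        const i = dirtyTop * w + x;
        acc[i] = carry1[i] + carry2[i];
        if (dirtyTop + 1 < h) acc[i + w] = carry2[i + w];
      }
      let matched = 0;  // consecutive rows whose downward error was unchanged
      for (let y = dirtyTop; y < h && matched < 2; y++) {
        const r1 = new Float32Array(w + 2);  // error sent to row y+1, columns x-1..x+1 (shifted by 1)
        const r2 = new Float32Array(w);      // error sent to row y+2, column x
        for (let x = 0; x < w; x++) {
          const i = y * w + x;
          const v = gray[i] + acc[i];
          const bw = v < 128 ? 0 : 255;
          out[i] = bw;
          const e = (v - bw) / 8;            // six Atkinson taps, 1/8 each
          if (x + 1 < w) acc[i + 1] += e;
          if (x + 2 < w) acc[i + 2] += e;
          r1[x] += e; r1[x + 1] += e; r1[x + 2] += e;
          r2[x] += e;
        }
        // Did this row push down exactly the same error as in the previous pass?
        let same = y > dirtyBottom;          // rows inside the dirty rect never count
        for (let x = 0; x < w && same; x++) {
          if (y + 1 < h && Math.abs(r1[x + 1] - carry1[(y + 1) * w + x]) > 1e-6) same = false;
          if (y + 2 < h && Math.abs(r2[x] - carry2[(y + 2) * w + x]) > 1e-6) same = false;
        }
        matched = same ? matched + 1 : 0;
        // Commit the new carries and fold them into the accumulator.
        for (let x = 0; x < w; x++) {
          if (y + 1 < h) { carry1[(y + 1) * w + x] = r1[x + 1]; acc[(y + 1) * w + x] += r1[x + 1]; }
          if (y + 2 < h) { carry2[(y + 2) * w + x] = r2[x]; acc[(y + 2) * w + x] += r2[x]; }
        }
      }
    }

The two-consecutive-rows condition is because diffusion reaches two rows down: once two adjacent rows emit unchanged error, everything below is guaranteed to re-dither identically, which is the early-out being described. The very first full dither is just a call with the whole image marked dirty and zeroed carry channels.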
Neat idea; I'd be curious to see it implemented! You can probably get away with just stopping at the bounds of the dirty area. Sure, the error would accumulate, but it would be localized to a single pixel. I doubt it would be visible.
One issue I suspect you'll encounter is that redrawing a restricted area during movement will cause that area to flicker weirdly while the rest of the image stays relatively stable.
> in theory changing a single pixel at the top left of the image could cause a chain reaction that could force the whole image to be re-dithered.
For a paint program, I think it would be acceptable if painting with the brush never changed existing pixels, only the pixels newly painted by the brush: you'd apply dithering just to the pixels added by the brush stroke as the mouse dragged. The fact that you might be able to kinda see the discontinuities at the edge of the brush feels like it would be a feature, not a bug -- you can see the brush strokes.
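A small sketch of that idea, with hypothetical names: a stroke mask marks the pixels the brush just covered, and quantization error is only diffused to other in-mask pixels, so the existing canvas is never revisited and the seam at the brush edge falls out naturally:

    function ditherStroke(
      gray: Float32Array, out: Uint8Array, mask: Uint8Array,
      w: number, h: number,
    ): void {
      const taps: [number, number][] = [[1, 0], [2, 0], [-1, 1], [0, 1], [1, 1], [0, 2]];
      const buf = Float32Array.from(gray);
      // In practice you'd scan only the stroke's bounding box, not the whole canvas.
      for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
          const i = y * w + x;
          if (!mask[i]) continue;            // untouched pixels keep their old dither
          const bw = buf[i] < 128 ? 0 : 255;
          out[i] = bw;
          const err = (buf[i] - bw) / 8;
          for (const [dx, dy] of taps) {
            const nx = x + dx, ny = y + dy;
            // Error that would land outside the stroke is simply dropped,
            // which is what produces the visible seam at the brush edge.
            if (nx >= 0 && nx < w && ny < h && mask[ny * w + nx]) buf[ny * w + nx] += err;
          }
        }
      }
    }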
The really interesting effect would come when you implemented a dodge or burn tool...
The trouble is that dithering works by smearing the error over an area of pixels in a way that your brain unsmears into something close to the original image. If you start manipulating some pixels but not nearby areas then you will get very visible artifacts.
Maybe that is OK if you are drawing straight lines and boxes, but any kind of detail is going to be destroyed.
Well sure, that's why you work in grayscale and only dither in the end if you want to maximize quality and detail.
I'm talking about from an artistic perspective. The way you see the brush strokes in certain styles of painting, it would add character to see where two dithered areas of the same lightness had a slightly visible discontinuity. I think it could be a very cool artistic effect.