What generation of GPU would you need to get any significant & supported benefit out of this, once completed? The discussion about the fallback seems to imply that some pretty rare feature is going to be used for at least some rendering paths. I'm a bit worried that this will then be reserved for current- or previous-gen GPUs, and if you're not amongst the lucky few, you'll skip straight to the CPU.
And if that fallback isn't fast/easy/ubiquitous enough, a widget library using this might ship a different/legacy renderer altogether, with all the chaos that entails.
I've put some thought into this question, and there's some flexibility: it's possible to take the basic ideas and invest more work in compatibility so they run on even more hardware, but that's a cost/benefit tradeoff, and it affects overall system complexity, especially the difficulty of extending the imaging model.
My current target is a GPU that can do compute shaders, and also has descriptor indexing. That includes DX12, Metal 2.0, and most desktop Vulkan. It leaves out DX11 and OpenGL. Mobile is complicated; I don't expect it to work on most legacy Android devices.
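For concreteness, here's a minimal sketch of what probing for that capability might look like on the Vulkan side. This is my own illustration, not the project's actual code; the helper name `supports_descriptor_indexing` and the particular feature bits checked are assumptions about what a compute-based 2D renderer would want.

```c
#include <vulkan/vulkan.h>

/* Hypothetical helper: given a physical device, check whether the
   descriptor-indexing features (Vulkan 1.2 core, formerly
   VK_EXT_descriptor_indexing) are available. Requires Vulkan 1.1+
   for vkGetPhysicalDeviceFeatures2. */
int supports_descriptor_indexing(VkPhysicalDevice phys)
{
    VkPhysicalDeviceDescriptorIndexingFeatures di = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DESCRIPTOR_INDEXING_FEATURES,
    };
    VkPhysicalDeviceFeatures2 feats = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
        .pNext = &di,
    };
    vkGetPhysicalDeviceFeatures2(phys, &feats);

    /* Non-uniform indexing into runtime-sized descriptor arrays is
       the kind of capability a compute-driven renderer would lean on. */
    return di.shaderStorageBufferArrayNonUniformIndexing &&
           di.runtimeDescriptorArray;
}
```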
The need for descriptor indexing can be worked around, which I think in practice brings Android-with-Vulkan back into the fold, and possibly DX11 as well.
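As a hedged illustration of how such a workaround might be wired up at startup (the path names and selection logic here are my own assumptions, not the project's design), a renderer could pick between a bindless path and a fixed-binding fallback, dropping to CPU only when compute is unavailable:

```c
/* Hypothetical path selection, building on supports_descriptor_indexing()
   from the sketch above. The fallback path would bind a small, fixed
   number of descriptors per dispatch instead of indexing a
   runtime-sized array, at the cost of more dispatches. */
typedef enum {
    RENDER_PATH_BINDLESS,    /* descriptor indexing available */
    RENDER_PATH_FIXED_SLOTS, /* fixed descriptor bindings, more dispatches */
    RENDER_PATH_CPU,         /* no usable compute support at all */
} RenderPath;

RenderPath choose_render_path(VkPhysicalDevice phys, int has_compute)
{
    if (!has_compute)
        return RENDER_PATH_CPU;
    if (supports_descriptor_indexing(phys))
        return RENDER_PATH_BINDLESS;
    return RENDER_PATH_FIXED_SLOTS;
}
```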
It's more a function of API and driver than actual hardware, except for very old and low-spec stuff. I suspect that the Raspberry Pi up to the 3 is off the table without massive work, but that Vulkan drivers on the 4 will catch up. (I have a Raspberry Pi 4 and intend to use it as a compatibility target, along with a Pine64.)
I hope that clarifies the situation. Until quite recently, GPU compute was not mainstream at all (aside from CUDA, which has been around a while), and targeting it would have seriously limited an app. I do think that's changing, even on inexpensive hardware.
The problem of supporting older hardware raises a question for me: Why is it worth it to reimplement 2D graphics using such cutting-edge GPU features? Isn't that just contributing to what many people perceive as the upgrade treadmill?
Take those legacy Android devices you mentioned. If CPU rendering, or more limited use of the GPU, was good enough for those devices when they shipped, why isn't it good enough now? Do we really need to keep increasing resolution, frame rate, color depth, or whatever, at the cost of leaving behind people stuck on older hardware and adding to e-waste?
At some point it has to stop, though. Computers have been mass-market products for at least 30 years, depending on how you define "mass-market". How much longer are we going to keep making formerly usable computers obsolete?
> How much longer are we going to keep making formerly usable computers obsolete?
For as long as increases in performance and energy efficiency open up the potential for new uses.
I don't see the problem here. You can still use the old computers with the old applications. Just like you can still use handcarts – that doesn't mean we shouldn't have developed the horse-drawn cart, even though it made handcarts obsolete. The same goes for trucks and horse-drawn carts.