Isn't that a bit risky to go this deep? If I understand things correctly, you are currently trying to build a font editor in Rust (Runebender). Because there is no good UI library in Rust, and there's good reason to think Rust is a good fit for this, you want to build your own (Druid). So far so good. But now you are also trying to build your own 2D renderer, and not a basic one: you want to advance the state of the art in that field too.
Not that I'm not excited (if all this lands, it will be super great for the Rust ecosystem), but because you're so deep in the rabbit hole, I'm afraid you'll get burned out (or bored) and that everything would be lost because nothing would be polished enough to be usable at that point.
Is that a risk you're taking on purpose? Or is it just that you have a plan to avoid it?
This is a good question, and I'm well aware of the broad scope of my projects, especially spanning low to high level.
A few things might help put this in context. First, Druid well predated Runebender, though Runebender is now the "hero app" (before, it was xi-editor, now on the back burner).
Second, I have a lot of experience doing high performance 2D and font rendering, from libart to working on the PDF 1.4 blend modes in Ghostscript to font-rs. I've also been curious about GPU for a while, not least because that was what was actually painting the text on Android, and I worked on the Android UI team for almost 5 years. So now that GPU compute is becoming mainstream, I see an almost unique opportunity to apply my skills there.
Third, I'm not doing all this by myself. I'm deliberately trying to cultivate open source community involvement and, really, building a team, though it has diffuse borders. Now is also a good time to acknowledge my long-time collaboration with Colin Rofls, who does a huge amount of the day to day work on both Runebender and Druid.
Lastly, I published this in advance of a career move, which is becoming a research software engineer on the Google Fonts team. I believe this is a perfect position for me to continue pushing this research agenda forward, and I'm hoping it will bring more resources and focus.
There is no chance of me getting bored doing this. Burnout is something I'm watching carefully, as I have experienced it (it was the end of my time on Android), but one thing I discovered in my 2.5 years away from being a full-timer at Google is that graphics research is what I love the most, and would probably be doing even if it wasn't also a good career move.
> A few things might help put this in context. First, Druid well predated Runebender, though Runebender is now the "hero app" (before, it was xi-editor, now on the back burner).
I imagine that as you're trying to build a community around the whole stack, some more "hero apps" would be welcome. What other apps would potentially align with your vision?
I think these will emerge organically in time, but to really exercise the power of piet-gpu in particular, we'll want to see apps that display or visualize huge amounts of data. I'd love to see scientific visualizations.
As I posted in reply to a similar request, this is an arcane field, and different things are meant by "2D" - in games it's largely painting sprites from sprite sheets, while I think of it as involving vector graphics, and there are other contexts as well.
I’m not someone of Raph’s caliber, but this reminds me of my own career rabbit hole.
In 2002 I was a film student with a “new media” side job (that’s what they used to call web/Flash development). I had cleared my schedule for six months so I could work on an ambitious animation project mixing 2D and 3D. As I got started, I figured I’d need to write some scripts for a 3D application I was using (Softimage). Looking deeper, I came to the conclusion my needs were complex enough that I should write a small separate application rather than scripts. Then I realized my application would need a little compositing engine to mirror what existed in Softimage. To make that work fast enough, I needed to learn about GPU shaders (brand new at the time). Also I needed to learn about YUV pixel formats and media containers...
Flash forward 18 years, I’m still at the bottom of this rabbit hole, sometimes halfheartedly looking for the door to go back up but just finding more glimpses into weirder wonderlands. I wrote another YUV compositor just a couple of months ago, but haven’t made a serious attempt at animation in probably 15 years.
I wonder if other people have this kind of experience, and whether there’s some kind of law of “tech stack career gravity” in evidence here. When you become expert in something lower level, it increases the “career delta-v” to move back up.
It's a "forever project". [0] If you look at a lot of luminary careers, they also involve forever projects that happen to produce a digestible result. There's an element of needing a bit of direction and structure so that you don't go full "mad scientist" and lose your grounding.
For me, the actual forever project, if I had to put words to it, is something to do with studies in the cybernetics of creativity, or in ELI5 terms: "why is making games hard?" It's not really approachable just by making a slightly better tool, or a great tutorial, or a magic algorithm, or by organizing a great team for a single project - and the scope is multidisciplinary, making it less legible what I'm doing or why. But if I examine all my attempts there is an orbit to it that is gradually getting tighter.
With respect to examining low level issues in computing there's a huge pull to engage in forever projects because everywhere you look, the solutions being built on are rough-and-ready ideas rushed into production decades ago and then examined only occasionally afterwards. My own efforts started getting sucked into the gravity well of "perhaps I need to write a language, an operating system and a runtime environment to properly express this" - I managed to climb out again recently by focusing on formats, data portability and conceptual specifications instead of concrete ones with actively maintained production code - it's always the maintenance that is a killer and I can see that being an issue for Piet's current strategy too. Our ability to conceptualize always outstrips the daily grind, but we need to have one to stay in that grounded mode.
On the other hand, looking at Raph's previous activities, I wouldn't be too surprised if all those other things you describe are just project management kung-fu to put him in a position where he just has to continue to do research that advances the state of 2D rendering.
> that everything would be lost because nothing would be polished enough to be usable at that point.
You mean like abandoning Xi in a "not good for anything" state after making big promises?
Sorry for being "this guy" but I'm extremely disappointed about the outcome of the Xi project...
Coming to the conclusion halfway in that "building a proper text editor is just too complicated and tedious" is a complete joke. It was clear from day one that solving the long-standing text editor problem would be a very big undertaking! And I'd better not say what I'm thinking if this wasn't clear.
I feel fooled. No matter how smart this person is, I'm now extremely skeptical of anything coming from that direction. I just have no faith in people who give up because things start to look complicated. Especially as it was obvious up front that things would get very, very hairy. There's a reason the text editor problem isn't solved to this day and we have to deal with a bunch of 80% solutions.
And now you're free to vote me into oblivion because I've dared to criticize one of the geniuses out there. (No, I'm not creating throwaways to voice an unpopular personal opinion, as is usual here; I stand by what I'm saying.)
One thing that Raph is exceptionally good at is building a community around these sorts of things, and guiding it in the right direction. None of these things are being built by Raph alone, though he’s definitely still a primary worker in a lot of them.
I think Rust and Haskell, due to their irrelevance in industry and general "nerdiness", attract a certain type of hacker. Specifically I've noticed that there are some libraries for both languages that are just so good that they make you never want to use anything else. One example in Haskell is Megaparsec [0], by far the best point in the parser framework design-space I'm aware of. An example in Rust is wgpu-rs, which maybe isn't "all the way there" yet but is already turning into a very ergonomic low-level rendering API. Although I haven't used them, I think piet-gpu and pathfinder may be in this category also. I don't have a link handy but I remember seeing a tweet from the developer of Pathfinder where he was reverse-engineering macOS's idiosyncratic font rendering so he could generate output that looked identical to the native renderer's. That's going above and beyond what any reasonable library developer would do and I love it.
I'm not saying there aren't any great libraries for other languages (there certainly are, like Halide for C++) but that I'm more often pleasantly surprised by the library quality in Haskell and Rust than I am in other languages (when there are libraries available, anyway).
I'm not sure you can really characterise Rust as irrelevant to industry anymore (with some of the most widely used software written in Rust and FAANG companies hiring core developers), but I do agree that it has an unusually high quality standard for libraries (I'm unfamiliar with the Haskell ecosystem but I hear good things).
However, I think that while some of it is certainly cultural, a lot of it is due to the type systems in these languages being a lot more powerful which allows for more expressive APIs and more robust error handling, etc.
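To make that claim concrete, here is a minimal, hypothetical sketch of the kind of API these type systems encourage; the names (LoadError, load_lines) are made up for illustration and don't come from any library mentioned here.

    use std::fmt;

    // Hypothetical error type: every failure mode is a named variant.
    #[derive(Debug)]
    enum LoadError {
        NotFound(String),
        Malformed { line: usize, reason: String },
    }

    impl fmt::Display for LoadError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                LoadError::NotFound(path) => write!(f, "{path}: not found"),
                LoadError::Malformed { line, reason } => {
                    write!(f, "parse error on line {line}: {reason}")
                }
            }
        }
    }

    // The signature says exactly what can happen: you get lines, or a LoadError.
    fn load_lines(path: &str) -> Result<Vec<String>, LoadError> {
        let text = std::fs::read_to_string(path)
            .map_err(|_| LoadError::NotFound(path.to_string()))?;
        Ok(text.lines().map(str::to_owned).collect())
    }

    fn main() {
        // The compiler forces the caller to handle (or explicitly propagate)
        // errors, and `match` must cover every variant.
        match load_lines("app.conf") {
            Ok(lines) => println!("{} lines", lines.len()),
            Err(e) => eprintln!("could not load config: {e}"),
        }
    }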
I was working on a decentralized website backend with a few people, and while Rust has some really good libraries, damn does Diesel suck. Because of that we moved to TypeScript.
Agreed that Diesel isn't great. I think it tried to be too ambitious with type safety and shot itself in the foot. SQLx is decent, although I'm sure someone will come up with a better "Full ORM" in time.
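For contrast, a minimal sketch of the SQLx style, assuming the sqlx crate with its Postgres feature and an existing pool; the users table and active column are made up for illustration. Queries stay as plain SQL (checked at runtime here, or at compile time with the query! macros) rather than modeling the whole schema in the type system the way Diesel does.

    use sqlx::postgres::PgPool;

    // Hypothetical query against a made-up `users` table.
    async fn active_user_count(pool: &PgPool) -> Result<i64, sqlx::Error> {
        let row: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM users WHERE active = $1")
            .bind(true)          // parameters are bound positionally
            .fetch_one(pool)     // runs against whatever the pool points at
            .await?;
        Ok(row.0)
    }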
Wgpu-rs is also going down the rabbit hole of re-inventing the world: in addition to implementing WebGPU and targeting all the GPU APIs that are available, we are also trying to do all the shader translation ourselves now (project Naga). I hope to not be burned by this.
Pathfinder's future is unclear. It's not ready to be integrated into WebRender yet, and Patrick no longer works for Mozilla. I see no activity on their projects. I'd appreciate it if they showed up here and clarified.
But with Amazon and Microsoft hiring core developers, Amazon and Dropbox using it for production services, Microsoft developing official bindings, Google using it for core components in their new Fuchsia OS and Facebook writing their whole Libra/Diem/whatever currency project in Rust, it can hardly be called irrelevant to the industry.
Rust is gaining traction very quickly, even with the big players.
It's definitely smaller than C++. C? Doubtful. And in no way is it "tiny" compared to anything. C++ might feel enormously bigger but that's because everything is actually specified.
Imagine how big the borrow checker specification would be if it existed! Or the specification for the requirements for writing safe `unsafe` code - last time I checked nobody was really sure what the precise answer to that even is.
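A small, self-contained illustration of both halves of that point (not tied to any of the projects above): the safe rule is easy to demonstrate, while the soundness rules for the unsafe part are still being pinned down by efforts like Stacked Borrows and the Miri interpreter.

    fn main() {
        // Safe Rust: "no two live &mut to the same data" is easy to state, but
        // the full borrow checker behaviour (NLL, reborrows, two-phase borrows)
        // exists only as an implementation, not a written specification.
        let mut x = 0;
        let r = &mut x;
        *r += 1;
        // let r2 = &mut x; *r += 1;   // rejected if uncommented: `x` borrowed twice

        // Unsafe Rust: which aliasing rules raw pointers must respect is still
        // an open question. This particular sequence is accepted by Miri under
        // the Stacked Borrows model, but "what exactly makes unsafe code sound"
        // has no finished specification yet.
        let p: *mut i32 = &mut x;
        unsafe { *p += 1 };
        println!("{x}"); // prints 2
    }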
Glad I wasn't the only one! I could probably shoehorn a CUDA backend into repiet... but I don't even know where to start on generating optimized code from this horrible stack-based language.
I've been playing with OpenCL lately. It's such a shame that the support is so spotty, and even Apple (who I think started it, probably as a CUDA alternative for their AMD cards) is deprecating it.
Even if it can't squeeze all the performance out of a device, it's still handy, especially as a cross-device API.
It would've been ideal for piet, especially since compute kernels can also run on the CPU across all cores, given a CPU driver. And I think some OpenCL implementations support SPIR-V. So it wouldn't have needed any fallbacks; the same kernels would run on either CPUs or GPUs.
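To illustrate that cross-device point, here's a sketch adapted from the third-party ocl crate's introductory example (exact builder method names may differ between crate versions); the same kernel source runs unchanged on whichever OpenCL device the platform exposes, CPU or GPU.

    use ocl::ProQue;

    fn run() -> ocl::Result<()> {
        // The kernel is plain OpenCL C; whether it runs on a CPU or a GPU is
        // decided by the platform/driver the queue is built on, not by the
        // kernel source itself.
        let src = r#"
            __kernel void add(__global float* buffer, float scalar) {
                buffer[get_global_id(0)] += scalar;
            }
        "#;

        let pro_que = ProQue::builder().src(src).dims(1 << 16).build()?;
        let buffer = pro_que.create_buffer::<f32>()?;

        let kernel = pro_que
            .kernel_builder("add")
            .arg(&buffer)
            .arg(10.0f32)
            .build()?;

        unsafe { kernel.enq()? };

        let mut out = vec![0.0f32; buffer.len()];
        buffer.read(&mut out).enq()?;
        println!("out[0] = {}", out[0]);
        Ok(())
    }

    fn main() {
        if let Err(e) = run() {
            eprintln!("OpenCL error: {e}");
        }
    }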
I'm still going to use it for my project. But it's a shame that AMD's ROCm OpenCL doesn't support running kernels on their CPUs.
Also, Rust might benefit from something like SYCL. There is some attractive convenience in having a DSL built into the language for parallel computing. But one can dream.
What generation of GPU would you need to get any significant & supported benefit out of this, when completed? The discussion about the fallback seems to imply that some pretty rare feature is going to be used at least for some rendering paths. I'm a bit worried that this is then reserved for some current- or previous-gen GPUs, but if you're not amongst the lucky few, you're going to skip straight to CPU.
And if that fallback isn't fast/easy/ubiquitous enough, a widget library using this might have a different/legacy renderer altogether, with all the chaos this entails.
I've put some thought into this question, and there's some flexibility - it's possible to take the basic ideas and put more work into compatibility to work on even more hardware, but that's a cost/tradeoff curve, and affects overall system complexity and especially the difficulty of extending the imaging model.
My current target is a GPU that can do compute shaders, and also has descriptor indexing. That includes DX12, Metal 2.0, and most desktop Vulkan. It leaves out DX11 and OpenGL. Mobile is complicated; I don't expect it to work on most legacy Android devices.
The need for descriptor indexing can be worked around, which I think in practice brings Android with Vulkan back into the fold, and possibly DX11 as well.
It's more a function of API and driver than actual hardware, except for very old and low-spec stuff. I suspect that Raspberry Pi up to 3 is off the table without massive work, but that Vulkan drivers on the 4 will catch up. (I have a Raspberry Pi 4 and intend to use it as a compatibility target, along with a Pine64.)
I hope that clarifies the situation. GPU compute was not mainstream at all (aside from CUDA, which has been around a while) and until quite recently, targeting it would seriously limit an app. I do think that's changing, even on inexpensive hardware.
The problem of supporting older hardware raises a question for me: Why is it worth it to reimplement 2D graphics using such cutting-edge GPU features? Isn't that just contributing to what many people perceive as the upgrade treadmill?
Take those legacy Android devices you mentioned. If CPU rendering, or more limited use of the GPU, was good enough for those devices when they shipped, why isn't it good enough now? Do we really need to keep increasing resolution, frame rate, color depth, or whatever, at the cost of leaving behind people stuck on older hardware and adding to e-waste?
At some point it has to stop though. Computers have been mass-market products for at least 30 years, depending on how you define "mass-market". How much longer are we going to keep making formerly usable computers obsolete?
> How much longer are we going to keep making formerly usable computers obsolete?
For as long as increases in performance and energy efficiency open up the potential for new uses.
I don't see the problem here. You can still use the old computers with the old applications. Just like you can still use hand-drawn carts – doesn't mean that we shouldn't develop the horse-drawn cart, even though they made the hand-drawn carts obsolete. Same goes for trucks and horse-drawn carts.
Unfortunately, both 2D graphics and the programming of Vulkan compute shaders are very arcane topics, and the intersection of the two is about as arcane as it gets.
Re 2D graphics, a long term ambition of mine is to write a book on the topic (and I have a repo started with a rough, bullet point outline), largely because I don't know of any source that brings the concepts together. I don't think I'll be able to spend significant time on it, though, as I have my hands pretty full. I am hoping to do more explaining and public communication as part of my piet-gpu work.
A good snapshot of GPU font rendering work from 3 years ago is: https://aras-p.info/blog/2017/02/15/Font-Rendering-is-Gettin... . The linked papers are quite good and have informed my thinking, though of course I hope to advance the state of the art even further.