## Description
Similar to how 3d scenes are "GPU-driven" (in that the textures, geometry, entities - the whole lot - are stored on the GPU in buffers), I'd like to do the same for 2d scenes.
(Answers from the future are annotated with 🔮.)
## Complaints about the current landscape
A recent blog post describing some of the pain points of picking a Rust GUI library has been getting a lot of attention.
The post kinda boils down to:
- must have good performance
- not too many draw calls
- in the author's opinion "text looks awful when rendered on the GPU" (I don't agree - see the discord link below)
- GPU re-rendering every frame is a battery killer (I think this is an application problem, really)
  - 🔮 Definitely an application problem - you don't have to use something like `winit::ControlFlow::Poll`, you can wait (see the sketch after this list)
- must be maintained
- should be native Rust, not loaded from a widget file/XML etc
- damage tracking
  - 🔮 Not so sure about this one - there are lots of different kinds of damage tracking, and rendering really can be incredibly fast, so I wonder how important this really is
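To make the battery point concrete: winit lets an application park its event loop until the OS delivers an event, rather than redrawing continuously. A minimal sketch against the winit 0.28-style closure API (the shape of this API differs between winit versions):

```rust
use winit::{
    event::{Event, WindowEvent},
    event_loop::{ControlFlow, EventLoop},
    window::WindowBuilder,
};

fn main() {
    let event_loop = EventLoop::new();
    let window = WindowBuilder::new().build(&event_loop).unwrap();

    event_loop.run(move |event, _, control_flow| {
        // Sleep until the OS delivers an event instead of spinning
        // every frame the way ControlFlow::Poll does.
        *control_flow = ControlFlow::Wait;

        match event {
            Event::WindowEvent {
                event: WindowEvent::CloseRequested,
                ..
            } => *control_flow = ControlFlow::Exit,
            Event::RedrawRequested(id) if id == window.id() => {
                // Draw here; call window.request_redraw() elsewhere
                // only when application state actually changes.
            }
            _ => {}
        }
    });
}
```

With `Wait` an idle GUI costs essentially nothing; `Poll` is only warranted for things like games that animate every frame.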
## Adjacent work
The Zed team has been working on "GPUI", which seems to be a similar idea to what renderling could be.
I like the idea of using SDFs for drawing primitives.
(🔮 SDFs are hard because there is no great abstraction for all of them apart from using a texture)
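For reference on how small these distance functions are when a closed form exists: here is a sketch of the standard SDFs (after Inigo Quilez) for a circle and a rounded rectangle, two of the primitives listed later, written with glam. It's arbitrary shapes like glyph outlines that resist a clean closed form:

```rust
use glam::Vec2;

/// Signed distance from `p` to a circle of `radius` centered at the
/// origin: negative inside, zero on the boundary, positive outside.
fn sd_circle(p: Vec2, radius: f32) -> f32 {
    p.length() - radius
}

/// Signed distance from `p` to an axis-aligned rounded rectangle with
/// half-extents `half_size` and corner `radius`, centered at the origin.
fn sd_rounded_rect(p: Vec2, half_size: Vec2, radius: f32) -> f32 {
    let q = p.abs() - half_size + Vec2::splat(radius);
    q.max(Vec2::ZERO).length() + q.x.max(q.y).min(0.0) - radius
}

fn main() {
    // A point just outside a circle of radius 10.
    assert!(sd_circle(Vec2::new(11.0, 0.0), 10.0) > 0.0);
    // The center of a 20x10 rounded rectangle is well inside it.
    assert!(sd_rounded_rect(Vec2::ZERO, Vec2::new(10.0, 5.0), 2.0) < 0.0);
}
```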
## What to do here
At the very least we could provide a "GPU-driven" 2d render graph. This would allow downstream libraries to build well-performing GUIs on top.
## Requirements / Obstacles
- Text
  - How do we store it?
    We could parse fonts into contours (possibly with `font`) and then store each glyph as a collection of beziers that we shade with an SDF - similar to what I did with gelatin using `FontyFruity`.
    - 🔮 A bitmap is cached locally on the CPU in `UiText` and then stored in the `Atlas`. After that it's a BAU `Hybrid<Renderlet>` setup.
  - It changes often, and often the specific glyphs and their order are not fully known in advance.
    The "dynamic" spots of text could be limited by a max length, making them storable in a buffer like `GpuVertex`.
    - 🔮 This turned out to be unnecessary after changes to the slab API. When text changes, the cache generates an updated bitmap and mesh, which are sync'd to the GPU. The previous allocation of geometry is dropped and recycled automatically (see the allocation sketch after this list).
- Events
  - We could help out downstream libs by doing hit testing, etc. on the GPU, since we'll be culling on the GPU anyway.
- Drawing other filled/outlined primitives
  - circle
  - rectangle
  - rounded rectangle
  - polygon
  - begin, line_to, bezier_to, etc., end (see the path-builder sketch after this list)
  - variable color stroke and separate variable color fill
  - image fill with vertex color multiply
  - gradient fill and gradient stroke (probably proxied to images under the hood)
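To illustrate the "dropped and recycled automatically" behavior from the text bullet above: below is a toy CPU-side model of slab allocation with a free list. All names here (`Slab`, `alloc`, `dealloc`) are hypothetical stand-ins for illustration, not renderling's actual slab API:

```rust
use std::collections::BTreeMap;

/// A toy CPU-side model of a GPU slab: a flat buffer of u32 words plus
/// a free list of recycled ranges.
struct Slab {
    data: Vec<u32>,
    free: BTreeMap<usize, usize>, // offset -> length of recycled range
}

impl Slab {
    fn new() -> Self {
        Slab { data: Vec::new(), free: BTreeMap::new() }
    }

    /// Allocate `len` words, reusing a recycled range when one fits.
    fn alloc(&mut self, len: usize) -> usize {
        // Look for a recycled range big enough to hold this allocation.
        let found = self
            .free
            .iter()
            .find(|&(_, &flen)| flen >= len)
            .map(|(&offset, &flen)| (offset, flen));
        if let Some((offset, flen)) = found {
            self.free.remove(&offset);
            if flen > len {
                // Return the unused tail to the free list.
                self.free.insert(offset + len, flen - len);
            }
            offset
        } else {
            // No recycled range fits: grow the slab.
            let offset = self.data.len();
            self.data.resize(offset + len, 0);
            offset
        }
    }

    /// "Dropping" an allocation just returns its range to the free list.
    fn dealloc(&mut self, offset: usize, len: usize) {
        self.free.insert(offset, len);
    }
}

fn main() {
    let mut slab = Slab::new();
    // First layout of a text run: its geometry lives at `old`.
    let old = slab.alloc(128);
    // The text changed: drop the old mesh, allocate the updated one.
    slab.dealloc(old, 128);
    let new = slab.alloc(96);
    // The new mesh reuses the recycled range.
    assert_eq!(new, old);
    println!("old offset: {old}, new offset: {new}");
}
```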
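For the begin/line_to/bezier_to/end item, the lyon crate's path builder is an existing example of this exact API shape; it's shown below only as a reference point, and whether renderling would use lyon or roll its own is an open question:

```rust
use lyon::math::point;
use lyon::path::Path;

fn main() {
    let mut builder = Path::builder();

    // One sub-path: a triangle-ish shape with one curved edge.
    builder.begin(point(0.0, 0.0));
    builder.line_to(point(100.0, 0.0));
    builder.cubic_bezier_to(
        point(100.0, 60.0), // first control point
        point(40.0, 100.0), // second control point
        point(0.0, 100.0),  // curve end point
    );
    builder.end(true); // `true` closes the sub-path

    let path = builder.build();
    // The finished path is an iterable list of events that a tessellator
    // (or an SDF shader generator) can consume.
    for event in path.iter() {
        println!("{event:?}");
    }
}
```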
https://discord.com/channels/318590007881236480/318590007881236480/1108469439168532540