
Drawing things on a canvas #7

Closed · piegamesde opened this issue Feb 4, 2021 · 9 comments
@piegamesde (Contributor)

Heyho, I've hit a fun limitation again 🙃

Basically, I want to draw on a gtk::DrawingArea. So I connected a hook that forwards the cairo::Context for that area, like I always do. But it draws all over the place, ruining my window. I suppose this is because the WoAB callback runs some time after the original GTK callback, and I'm not supposed to use the context by that time. The glitches are then caused by race hazards with other drawing routines.
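For reference, a minimal sketch of the pattern in question (assumptions: gtk-rs 3.x, with a plain channel standing in for WoAB's signal routing; forward_draw is a made-up name):

use gtk::prelude::*;
use std::sync::mpsc;

// Sketch of the problematic pattern: the cairo::Context received by the
// closure is only valid while the closure runs. Forwarding a clone to be
// handled later (when the actors are cranked in the idle callback) means
// drawing happens outside the draw signal, racing with GTK's own
// rendering - hence the glitches.
fn forward_draw(
    area: &gtk::DrawingArea,
    tx: mpsc::Sender<(gtk::DrawingArea, cairo::Context)>,
) {
    area.connect_draw(move |area, ctx| {
        let _ = tx.send((area.clone(), ctx.clone()));
        gtk::Inhibit(false)
    });
}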

Possible remedies, though I'm not sure about either:

  • Use conventional callbacks and Rc all the things I need for drawing (not again 😩). If only I could Rc a whole Actor, but I don't know how, so I don't think this is easily achievable (from the Actix documentation: "Actors cannot be referenced directly, only by their addresses.").
  • Someone suggested drawing to an off-screen ImageSurface. This sounds reasonable because I'd only need to Rc that one surface, but looking at the details, this promises to turn updating the image into callback hell (see the sketch after this list).
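For completeness, a rough sketch of that off-screen approach (illustrative only; assumes gtk-rs 3.x, and the surface size is a placeholder):

use gtk::prelude::*;
use std::{cell::RefCell, rc::Rc};

// Off-screen idea: render into a shared ImageSurface whenever the state
// changes, and let the draw handler merely blit that surface. Keeping the
// surface up to date from the actors is where the callback hell begins.
fn setup_offscreen_draw(area: &gtk::DrawingArea) -> Rc<RefCell<cairo::ImageSurface>> {
    let surface = Rc::new(RefCell::new(
        cairo::ImageSurface::create(cairo::Format::ARgb32, 800, 600)
            .expect("failed to create surface"),
    ));
    let for_draw = Rc::clone(&surface);
    area.connect_draw(move |_, ctx| {
        // Only paint the pre-rendered surface here; the actual drawing
        // happens elsewhere, at the actors' own pace.
        ctx.set_source_surface(&*for_draw.borrow(), 0.0, 0.0);
        ctx.paint();
        gtk::Inhibit(false)
    });
    surface
}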
@idanarye (Owner) commented Feb 15, 2021

Actually, you can Rc (well, Rc<RefCell>) a whole Actor:

struct SharedActor<A: Actor>(Rc<RefCell<A>>);

impl<A: Actor> Actor for SharedActor<A> {
    // Route the lifecycle methods (started, stopped, ...) to the mutably
    // borrowed inner A.
}

impl<A, T> Handler<T> for SharedActor<A>
where
    A: Actor + Handler<T>,
    T: Message,
{
    // Route handle() to the mutably borrowed inner A.
}

impl<A, T> StreamHandler<T> for SharedActor<A>
where
    A: Actor + StreamHandler<T>,
{
    // Route handle() to the mutably borrowed inner A.
}

But I'm not sure how well it'll work if you use the actor context. Not even sure you can use it... so scrap that.

@idanarye (Owner)

Unlike #2, where it was enough to set the result in an inhibit closure, here we actually care about the actor state. So I'm thinking about cranking the Tokio reactor inside the signal handler. I don't like doing this, because I don't want to hog the CPU while a signal is running, and because I don't want to crank the Tokio reactor multiple times in a single iteration of the GTK event loop (once per signal - though maybe that's not so bad, since a crank with nothing pending does little). So I'm going to design it in a way that discourages overuse.

Because we need to wait for a response, I will need - like you suggested in #10 - to use Message with SendWrapper or with Fragile.
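For illustration, a sketch of what that wrapping could look like (the struct name is hypothetical; assumes the fragile crate):

use fragile::Fragile;

// Addr::send requires the message type to be Send, but cairo::Context is
// not. Fragile makes the wrapper Send while panicking on cross-thread
// access - harmless here, since the actor runs on the GTK thread anyway.
struct WrappedDrawContext(Fragile<cairo::Context>);

impl actix::Message for WrappedDrawContext {
    type Result = ();
}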

  • Salt I: While I could use an Addr to route these signal handlers, I'm going to insist on using the actor Context, to enforce that the actor runs on the same thread as the GTK loop.

To define the signal, we'll use a new custom derive proc macro:

#[derive(woab::BlockingBuilderSignal)]
struct DrawSignal(gtk::DrawingArea, cairo::Context);
  • Salt II: Unlike BuilderSignal, BlockingBuilderSignal will not support enums. I want to discourage users from lazily overloading a blocking signal with all their other, non-blocking signals.

I'm also pondering using a separate function to connect it:

(...).instantiate().actor()
    .connect_signals(RegularSignal::connector())
    .connect_blocking_signal(DrawSignal::connector())
    ...

While I don't strictly need this as a salt, it's probably going to be easier to implement it that way than to have one trait covering both non-blocking and blocking signals. Then again, future-wise, maybe it's better to have the trait?

I will support tagging, because tagging exists to ease the use of multiple signals of the same type, and since a blocking signal already has to be blocking, allowing tags will not encourage overuse.

@piegamesde (Contributor, Author)

I like your proposal so far (by the way, I probably wouldn't make a single trait for both), but it doesn't answer the question of where my drawing code would go. And I also wonder how you plan to access the actor from within a GTK callback.

@idanarye (Owner)

Oh, yeah, sorry.

WoAB's trick is that it cranks the Tokio runtime during a GTK idle callback. What I'm planning is to do the same during these signal handlers. I'll make these signals messages, because Actix allows you to wait for a message to be handled. So if I send a message and block_on it inside the signal handler, I can ensure the message was fully handled while the signal was still active - and that allows putting the drawing code inside the actix::Handler.
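To illustrate where the drawing code would land (a hypothetical sketch, not the final API; MyActor and the Message impl for DrawSignal are assumptions):

// With DrawSignal delivered as a blocking message, the drawing code lives
// in an ordinary actix::Handler and runs while the GTK draw signal is
// still active - so the cairo::Context is still valid, and the actor's
// state (self) is available.
impl actix::Handler<DrawSignal> for MyActor {
    type Result = ();

    fn handle(&mut self, msg: DrawSignal, _ctx: &mut Self::Context) {
        let DrawSignal(_area, cairo_context) = msg;
        cairo_context.set_source_rgb(1.0, 1.0, 1.0);
        cairo_context.paint();
    }
}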

@piegamesde (Contributor, Author)

Oh, I see. If we're doing this anyway, can we please also do it for the inhibits? I mean, the current approach works, but it would be a lot more convenient to have the code that belongs together in one place.

@idanarye (Owner)

I might be prematurely optimizing here, but I'm a bit reluctant to use this for inhibits, because that would mean using it for every signal that uses inhibit. That means potentially cranking the Tokio runtime needlessly many times in each iteration of the GTK event loop, and even if most of those cranks are eventless, they could still induce slowness due to the constant overhead and/or the rechecking of wakers (though I'm not sure all wakers are always checked).

So... before we go there, I'd rather have a good benchmark application (or even just an example) to determine how much this affects performance.

@piegamesde (Contributor, Author)

Well, we can always start by measuring the overhead for the draw commands and then decide whether it's worth adding for all the others.

I can see that doing this will introduce some latency into the event loop, but to be honest, I don't really understand why it would result in an overall performance overhead.

Normal operation as I understand it: GTK gets an event -> calls the signal handler -> the signal handler pushes an event onto the stream -> GTK calls the idle callback -> Tokio processes the event from the stream.

With the proposed change: GTK gets an event -> calls the signal handler -> the signal handler calls Tokio -> Tokio processes the event -> GTK goes back to business as usual.

@idanarye (Owner)

I'm mostly worried about signals that fire multiple times during one iteration, cranking the Tokio runtime many times instead of accumulating all the signals in the streams and cranking it only once.

But the more I talk about it the more I think it really is a premature optimization. The only signal I can think of that's going to be called every frame is draw, and any draw implementation is probably heavy enough to dwarf the Tokio overhead.

This leaves me with one concern - can anything done inside a GTK signal trigger another GTK signal?

@piegamesde (Contributor, Author)

can anything done inside a GTK signal trigger another GTK signal?

Sure. queue_draw is the first example that comes to mind, but almost any operation that touches the widgets is probably going to fire at least some property notifiers. I'm not sure that poses any problems, though: GTK gets an event -> calls the signal callback -> calls the Tokio engine -> calls the WoAB signal handler -> pushes a new event onto the main loop queue -> (the signal callback has terminated by now; we're back in the main loop) GTK gets the next event -> etc.

  • We do lose the ability to batch-process events. I can't tell how much overhead this causes, or in how many cases that opportunity would have existed in the first place.
  • The order in which some events are processed changes slightly. I'm not concerned about this, as we don't really make any ordering guarantees anyway. The order of events that strictly depend on each other is still preserved, and that's what matters IMO.
