
Improve beacon processor scheduling #6291

Open

michaelsproul opened this issue Aug 22, 2024 · 2 comments
Labels: HTTP-API, optimization (Something to make Lighthouse run more efficiently.)

Comments

@michaelsproul (Member)
michaelsproul commented Aug 22, 2024

Description

Currently Lighthouse uses a priority-based scheduling algorithm to assign work to threads in its BeaconProcessor: https://github.com/sigp/lighthouse/blob/stable/beacon_node/beacon_processor/src/lib.rs

We implemented the BeaconProcessor primarily as a way to avoid overloading the machine with too many jobs running in parallel. Its priority system mostly works to ensure that high-priority work gets completed ahead of low-priority work, but it sometimes struggles to ever complete low-priority work (starvation): if high-priority work keeps coming in, it can indefinitely postpone the processing of low-priority work.

Another issue is that the priorities are currently determined per task type and follow a strict ordering, e.g. processing blocks is higher priority than processing attestations. In some cases it's unclear exactly what the task priority ordering should be and we end up having to choose somewhat arbitrarily (e.g. when testing #5481 I obtained some benefit from prioritising status messages higher in the ordering). For API requests this is a particular issue because we have two priorities for requests: P0, which are processed ahead of almost everything, and P1, which are basically bottom of the barrel and won't be run until the processor has some spare cycles.
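
To make the starvation failure mode concrete, here is a toy model of strict priority scheduling (the types are hypothetical, not the actual beacon processor queues, which are per-work-type FIFOs): as long as at least one higher-priority job arrives per scheduling decision, the low-priority job never runs.

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Hypothetical, simplified model of strict priority scheduling; not
// Lighthouse's actual WorkEvent/queue types. Lower number = higher priority.
fn main() {
    // BinaryHeap is a max-heap, so Reverse makes priority 0 pop before 10.
    let mut queue: BinaryHeap<(Reverse<u8>, &'static str)> = BinaryHeap::new();

    // A low-priority job arrives first...
    queue.push((Reverse(10), "backfill batch"));

    // ...but high-priority work keeps arriving, so the strict ordering means
    // the backfill batch never reaches the front of the queue.
    for _ in 0..5 {
        queue.push((Reverse(0), "gossip block"));
        // One worker slot frees up per "tick": it always runs the gossip block.
        let (Reverse(prio), name) = queue.pop().unwrap();
        println!("ran {name} (priority {prio})");
    }
    // The backfill batch is still queued: starvation under sustained load.
    println!("still waiting: {:?}", queue.peek());
}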

Version

Lighthouse v5.3.0

Steps to resolve

I would like to abstract the actual scheduling logic out so we can experiment with different algorithms. This will require untangling all of the concerns that are currently mixed into the beacon processor, e.g. backfill rate limiting, which gets its own special treatment here:

Some(InboundEvent::WorkEvent(event)) if enable_backfill_rate_limiting => {
    match QueuedBackfillBatch::try_from(event) {
        Ok(backfill_batch) => {
            match work_reprocessing_tx
                .try_send(ReprocessQueueMessage::BackfillSync(backfill_batch))
            {
                Err(e) => {
                    warn!(
                        self.log,
                        "Unable to queue backfill work event. Will try to process now.";
                        "error" => %e
                    );
                    match e {
                        TrySendError::Full(reprocess_queue_message)
                        | TrySendError::Closed(reprocess_queue_message) => {
                            match reprocess_queue_message {
                                ReprocessQueueMessage::BackfillSync(
                                    backfill_batch,
                                ) => Some(backfill_batch.into()),
                                other => {
                                    crit!(
                                        self.log,
                                        "Unexpected queue message type";
                                        "message_type" => other.as_ref()
                                    );
                                    // This is an unhandled exception, drop the message.
                                    continue;
                                }
                            }
                        }
                    }
                }
                Ok(..) => {
                    // backfill work sent to "reprocessing" queue. Process the next event.
                    continue;
                }
            }
        }
        Err(event) => Some(event),
    }
}

It might be prudent to start by just refactoring the current scheduling algorithm into a cleaner and more modular form, and then we can go about tweaking it.
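
As a very rough sketch of what the abstraction could look like (the names and signatures below are hypothetical, not a proposal for the final interface), the event loop would only talk to the scheduler through something like:

use std::time::Instant;

// Hypothetical types for illustration; the real beacon processor has a much
// richer WorkEvent/Work enum and per-queue state.
pub struct WorkItem {
    pub kind: WorkKind,
    pub enqueued_at: Instant,
}

#[derive(Clone, Copy, PartialEq, Eq)]
pub enum WorkKind {
    GossipBlock,
    GossipAttestation,
    BackfillBatch,
    ApiRequestP0,
    ApiRequestP1,
}

/// The interface a scheduling backend would implement. The event loop only
/// needs to enqueue work and ask which item should run when a worker frees up.
pub trait Scheduler {
    /// Accept a new work item (may be handed back if queues are full).
    fn enqueue(&mut self, item: WorkItem) -> Result<(), WorkItem>;

    /// Pick the next item to run, given how many workers are currently idle.
    fn next(&mut self, idle_workers: usize) -> Option<WorkItem>;

    /// Number of items currently queued (for metrics / back-pressure).
    fn len(&self) -> usize;
}

// A strict-priority backend equivalent to today's behaviour and, say, an
// ageing or deadline-based backend could then be compared behind the same
// interface, e.g. (hypothetical names):
//
//   let mut sched: Box<dyn Scheduler> = Box::new(StrictPriority::default());
//   if let Some(item) = sched.next(idle) { spawn_worker(item); }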

I think the properties we desire from a scheduling algorithm are:

  • Prioritisation based on task type (retain some of what we have now).
  • Prioritisation based on how long a task has been waiting to run, and/or how close it is to its deadline (see https://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling); a rough sketch of an ageing variant follows after this list.
  • High throughput? Ideally we should be able to max out the CPU if we are running N CPU-bound tasks.
  • Improved fairness.

Some of these desires are slightly in conflict, so having the ability to try multiple different backends is advantageous.
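
For the waiting-time point above, one simple option is to "age" the effective priority so that queued work eventually outranks freshly arrived high-priority work. A minimal sketch (the task kinds and weights are made up for illustration; an EDF-style variant would key on deadlines instead of waiting time):

use std::time::{Duration, Instant};

// Hypothetical scoring function mixing static task priority with waiting
// time ("ageing"); not existing beacon processor code.
#[derive(Clone, Copy)]
enum Kind {
    GossipBlock,
    GossipAttestation,
    BackfillBatch,
}

fn base_priority(kind: Kind) -> u64 {
    // Higher score = run sooner.
    match kind {
        Kind::GossipBlock => 1_000,
        Kind::GossipAttestation => 500,
        Kind::BackfillBatch => 0,
    }
}

/// Effective priority grows with time spent in the queue, so low-priority
/// work cannot be postponed indefinitely (bounded starvation).
fn effective_priority(kind: Kind, enqueued_at: Instant, now: Instant) -> u64 {
    let waited_ms = now.duration_since(enqueued_at).as_millis() as u64;
    // 1 point per millisecond waited; after ~1s a backfill batch outranks
    // a freshly arrived gossip block (with these made-up weights).
    base_priority(kind) + waited_ms
}

fn main() {
    let now = Instant::now();
    let old_backfill = now - Duration::from_millis(1_500);
    assert!(
        effective_priority(Kind::BackfillBatch, old_backfill, now)
            > effective_priority(Kind::GossipBlock, now, now)
    );
    println!("aged backfill batch now outranks fresh gossip block");
}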

Something I'm not sure if we can handle is pre-emption (the ability to stop a task midway through execution). For blocking tasks, I don't think this is possible unless we define some sort of yield primitive which returns from the task to the scheduler. This seems like it would be hard to implement. If we could piggyback off Tokio's pre-emption (probably only for async tasks) or the OS's pre-emption (for blocking tasks), maybe we could get some interesting results.
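
For completeness, the kind of yield primitive I mean for blocking tasks would look roughly like this (purely hypothetical, nothing like it exists in the processor today): the task checks a flag between units of work and hands the unprocessed remainder back to the scheduler for re-queueing.

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Hypothetical sketch of cooperative yielding for blocking tasks.
enum TaskOutcome {
    Done,
    Yielded { remaining: Vec<u64> },
}

fn process_batch(items: Vec<u64>, should_yield: &AtomicBool) -> TaskOutcome {
    let mut iter = items.into_iter();
    while let Some(item) = iter.next() {
        // Stand-in for real per-item work (e.g. verifying one attestation).
        std::hint::black_box(item);

        if should_yield.load(Ordering::Relaxed) {
            // Return the unprocessed tail to the scheduler for re-queueing.
            return TaskOutcome::Yielded { remaining: iter.collect() };
        }
    }
    TaskOutcome::Done
}

fn main() {
    let should_yield = Arc::new(AtomicBool::new(false));
    // Simulate the scheduler requesting pre-emption immediately.
    should_yield.store(true, Ordering::Relaxed);
    match process_batch((0..10).collect(), &should_yield) {
        TaskOutcome::Yielded { remaining } => {
            println!("yielded with {} items left", remaining.len())
        }
        TaskOutcome::Done => println!("finished without yielding"),
    }
}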

We can start with ideas from:

Additional Info

Regarding throughput, I wrote a sample program to test the current scheduler, and we actually are capable of maxing out the CPU so long as each job runs for more than a few microseconds! I'll push this on a branch somewhere shortly.
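
A simplified sketch of the kind of micro-benchmark I mean (not the actual branch, and the job length and pool size here are arbitrary): push CPU-bound jobs of a fixed length through a small worker pool and compare wall-clock time against the ideal for the number of cores.

use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

fn busy_work(duration: Duration) {
    // Spin for roughly `duration` to emulate a CPU-bound job.
    let start = Instant::now();
    while start.elapsed() < duration {
        std::hint::black_box(0u64);
    }
}

fn main() {
    let workers = thread::available_parallelism().map(|n| n.get()).unwrap_or(4);
    let jobs = workers * 200;
    let job_len = Duration::from_micros(50);

    // Shared job queue: workers pull the next job length off the channel.
    let (tx, rx) = mpsc::channel::<Duration>();
    let rx = std::sync::Arc::new(std::sync::Mutex::new(rx));

    let start = Instant::now();
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let rx = rx.clone();
            thread::spawn(move || loop {
                // Lock only to dequeue; the work itself runs outside the lock.
                let job = match rx.lock().unwrap().recv() {
                    Ok(job) => job,
                    Err(_) => break,
                };
                busy_work(job);
            })
        })
        .collect();

    for _ in 0..jobs {
        tx.send(job_len).unwrap();
    }
    drop(tx);
    for h in handles {
        h.join().unwrap();
    }

    let elapsed = start.elapsed();
    let ideal = job_len * jobs as u32 / workers as u32;
    println!("ran {jobs} jobs in {elapsed:?} (ideal ~{ideal:?} on {workers} workers)");
}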

michaelsproul added the optimization (Something to make Lighthouse run more efficiently.) and HTTP-API labels on Aug 22, 2024
@eserilev (Collaborator)

eserilev commented Aug 29, 2024

I'd like to take a stab at abstracting away the beacon processor scheduling logic if no one has started on this yet

@michaelsproul (Member, Author)

@eserilev That would be great! Happy to pair on this a bit and throw ideas around
