Make blocking interrupts configurable #271
We use eventfd for that:
The read does not block, but returns an error if nothing is available; in that case the thread yields. In general this is a "busy wait", and this part is optimized for low latency. If you are fine with higher latencies, you can change this line (tapasco/runtime/libtapasco/src/pe.rs, line 137 in 8b21b7f):

```rust
interrupt: Interrupt::new(completion, interrupt_id, true).context(ErrorInterrupt)?,
```

This should really be part of the configuration file, tbh, so a user can easily switch between the modes.
Thank you for the quick reply! One follow-up question: does this influence the speed at which the PE processes its input?
No, only the speed at which the interrupt is noticed by the host, so for long-running PEs this is less of an issue. Check out slides 12 and 13 at https://www.mcs.anl.gov/events/workshops/ross/2020/slides/ross2020-heinz.pdf — the "Original Runtime" there has basically the same performance as the Rust runtime with blocking waits.
Out of curiosity: did you compare the power consumption of the busy-wait and interrupt-based wait strategies? I am wondering how much power the PCIe IP uses.
The motivation behind rebuilding everything in Rust was memory safety and ease of maintenance, was it not? |
PCIe typically uses around 30 W regardless of usage, but that has nothing to do with the busy waiting. Busy waiting only affects the power usage of the host itself: there is no polling over PCIe, only polling of the eventfd status flag. It mainly prevents the CPU from entering deep sleep states, which would otherwise cause very long and unpredictable latencies. If you don't care about those, you can simply switch to blocking waits.
Yes, pretty much. It is much easier to extend, has better error handling, comes with a huge selection of easy-to-use libraries, and is at least as fast in the tested scenarios.
Thank you! 👍
In essence, I am using

in order to start a PE via its PEID. `tapasco_job_release` does appear to busy wait on one CPU core. Is this the expected behavior of the runtime, or is it an error on my side?