harbour-oc-daemon: High CPU usage / I/O load after network switch #46
I wonder why this is happening. Is this an invalidated or closed connection that the common code doesn't handle correctly yet? We do abort running sync operations in the daemon, but I guess more is needed. I'll take a look at it when I find the time to do so, which might not be within the next month.
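If it really is a stale connection, the direction I'd explore looks roughly like this. A minimal, untested sketch; `SyncWorker`, `startTransfer` and `abortPendingTransfers` are invented names, and I'm assuming the transfers are plain `QNetworkReply` objects:

```cpp
// Untested sketch: track in-flight replies and abort them when the
// active network configuration changes, so nothing keeps running
// (and erroring) on a connection that no longer exists.
#include <QNetworkAccessManager>
#include <QNetworkConfigurationManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QObject>
#include <QSet>

class SyncWorker : public QObject
{
    Q_OBJECT
public:
    explicit SyncWorker(QObject *parent = nullptr) : QObject(parent)
    {
        // Fires on wifi <-> mobile switches, not only on going offline.
        connect(&m_netConfig, &QNetworkConfigurationManager::configurationChanged,
                this, &SyncWorker::abortPendingTransfers);
    }

    void startTransfer(const QNetworkRequest &request)
    {
        QNetworkReply *reply = m_nam.get(request);
        m_pending.insert(reply);
        connect(reply, &QNetworkReply::finished, this, [this, reply]() {
            m_pending.remove(reply);
            reply->deleteLater();
        });
    }

public slots:
    void abortPendingTransfers()
    {
        // abort() emits finished(), which removes the reply above,
        // so iterate over a copy of the set.
        const QSet<QNetworkReply *> replies = m_pending;
        for (QNetworkReply *reply : replies)
            reply->abort();
    }

private:
    QNetworkAccessManager m_nam;
    QNetworkConfigurationManager m_netConfig;
    QSet<QNetworkReply *> m_pending;
};
```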
Could it be related to this being a dual-SIM device and the code assuming there can be only one™ mobile connection? (Just a stab in the dark here.) Oh, and one more thing: the fact that there are 300 log entries per second is, I think, a problem in itself.
Here's journald complaining about the log abuse:
I wonder if simply finding a way to log less would do away with the I/O and CPU load issue, even without finding and fixing the underlying problem.
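For that suppression idea, a custom Qt message handler might already be enough to keep journald alive even while the underlying bug is still there. A rough sketch; `rateLimitedHandler` is an invented name, and the threshold of 10 plus the exact-match heuristic are placeholders:

```cpp
// Rough sketch: a message handler that swallows consecutive duplicates
// after a threshold, so one runaway warning can't flood journald.
#include <QtGlobal>
#include <QString>
#include <cstdio>

static void rateLimitedHandler(QtMsgType type,
                               const QMessageLogContext &context,
                               const QString &msg)
{
    Q_UNUSED(type);
    Q_UNUSED(context);

    static QString lastMsg;
    static long repeats = 0;

    if (msg == lastMsg && ++repeats > 10)
        return; // drop the flood
    if (msg != lastMsg && repeats > 10)
        fprintf(stderr, "(last message repeated %ld times)\n", repeats);
    if (msg != lastMsg)
        repeats = 0;
    lastMsg = msg;
    fprintf(stderr, "%s\n", qPrintable(msg));
}

// Installed once, early in the daemon's main():
//     qInstallMessageHandler(rateLimitedHandler);
```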
If memory serves me right it must be a Qt-internal error message, so fixing the underlying problem is the way to go. It might be related to me switching back to keeping the … Currently the … So basically, whenever the pointer is passed through the … I'll take a stab at it if you don't beat me to it. :)
@nephros do you think you can follow the build instructions and build a copy of GhostCloud yourself using the Sailfish SDK? I don't have it installed on my machine right now.
I also do not have the SDK/build environment set up at the moment. So the answer is yes, eventually, but not in the very near future.
So, I have successfully built the RPMs using GitLab CI. What would you have me do now that I can build stuff?
From time to time the daemon goes berserk and causes very high CPU usage.
It also spams the log/journal, causing journald to show high load as well.
This of course causes tremendous battery drain (up to 20%/h according to sysmon).
Stopping and restarting the daemon does fix its CPU load, but journald still seems confused afterwards.
Restarting journald as well puts things back to normal.
Environment:
Steps to reproduce:
At least I can trigger it sometimes using this procedure.
The journal is then filled with the following, about 300 times per second (!):
Top shows this: