There is currently an issue where, if the connection stalls for some reason, the Python process can balloon in memory usage. It will keep growing until either the connection recovers or the system throws a MemoryError. I'm attempting to resolve this by catching the error during the download so that it gets handled.
In testing, it currently catches the exception at most twice in the same run. The first time, it attempts to reset progress and restart; there is a flush in the progress function, so in theory that should release the excess memory. If the exception happens a second time, it gives up: the exception is still caught, but instead of restarting progress it breaks execution.
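The catch-twice behavior described above can be sketched roughly like this. The function and parameter names (`download_with_retry`, `fetch_chunks`, `reset_progress`) are hypothetical, not the script's real identifiers, and the real code streams to disk rather than buffering; this only illustrates the retry/give-up flow.

```python
import gc

MAX_MEMORY_ERRORS = 2  # catch the exception at most twice per run

def download_with_retry(fetch_chunks, reset_progress):
    """Restart a stalled download once on MemoryError; give up the second time.

    fetch_chunks and reset_progress are placeholder callables standing in
    for the real download and progress-reset logic.
    """
    errors = 0
    while True:
        try:
            data = bytearray()
            for chunk in fetch_chunks():
                data.extend(chunk)  # the real script writes chunks to disk
            return bytes(data)
        except MemoryError:
            errors += 1
            if errors >= MAX_MEMORY_ERRORS:
                return None  # second occurrence: break execution instead
            reset_progress()  # flush progress state, as in the real script
            gc.collect()      # encourage the interpreter to release buffers
```

On the first MemoryError the loop resets and retries; on the second it returns without retrying, mirroring the "gives up but still catches cleanly" behavior.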
There is another issue (found while retrieving Web Ghost Pipopa Episode 39) where a show may have a duplicate subtitle entry. What appears to happen is that when a typo is corrected, a new entry is added and the old one is never removed. I got to see both versions, and there was a single-character difference that looks like a typo fix.
Currently, when downloading/decrypting, it ends up with the most recent revision, but also with a duplicate entry in the mkv, and because of this a file-not-found error is thrown at the end during cleanup.
I've added exception catches for WindowsError and OSError during cleanup so that it can move on (it still prints a cleaner version of the error for review), and during mkvmerge I now check whether the file is already in the command line being built. If it is, it simply continues to the next iteration, avoiding a duplicate entry in the mkv.
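The two fixes above might look roughly like the following sketch. The function names (`build_mkvmerge_cmd`, `cleanup`) are hypothetical stand-ins for the real script's code; note that on Python 3, WindowsError is an alias of OSError, so catching OSError covers both.

```python
import os

def build_mkvmerge_cmd(output_path, subtitle_files):
    """Build an mkvmerge command line, skipping files already added."""
    cmd = ["mkvmerge", "-o", output_path]
    for path in subtitle_files:
        if path in cmd:
            continue  # duplicate revision: don't add the same file twice
        cmd.append(path)
    return cmd

def cleanup(paths):
    """Delete temp files, tolerating ones already gone (duplicate entries)."""
    for path in paths:
        try:
            os.remove(path)
        except OSError as err:  # WindowsError is a subclass/alias of OSError
            # Print a cleaner version of the error for review, then move on.
            print("cleanup skipped %s: %s" % (path, err))
```

With this, a duplicated subtitle path contributes only one track to the mkv, and the second delete attempt during cleanup no longer aborts the run.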
This part has been fully tested on the episode in question. It should in theory also work for multi-lingual setups, although I don't know of any to test against.
I also got an ssl.SSLError for a timeout. Since timeouts are handled already, I want to handle this one directly as well. I don't know what the error code is, though, so I'm hoping to capture it if it happens again.
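Until the actual error is captured again, one interim approach is to catch ssl.SSLError and match on its message text. The "timed out" substring check below is an assumption, since the real error code/text hasn't been recorded yet; logging the full error would let it be matched precisely once it reoccurs.

```python
import ssl

def is_ssl_timeout(err):
    """Heuristic: treat an ssl.SSLError as a timeout if its text says so.

    The exact message for the observed failure is unknown, so matching
    on 'timed out' is an assumption to be refined once the error is
    captured again.
    """
    return "timed out" in str(err).lower()
```

A caller could then route these into the existing timeout-handling path, while logging anything that doesn't match for later review.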
No. I tried to clear it up but it still showed up every now and again. When I tried to set up a "catch and restart" loop it didn't actually clear the memory and got stuck in an infinite loop. Eventually, I gave up on the issue.