Setting different values for min_scene_len results in the same content being divided into different scenes #469
Using the default value of `min_scene_len`, I get one set of scene boundaries. With `min_scene_len` set to 2 seconds (`2 * video.frame_rate`), the same content is split into different scenes. `video.frame_rate` is 20.0.
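For reference, the comparison being described could look roughly like this with the Python API (the video path is hypothetical, and `ContentDetector` is assumed since the detector in use isn't stated above):

```python
from scenedetect import open_video, SceneManager
from scenedetect.detectors import ContentDetector

VIDEO_PATH = "input.mp4"  # hypothetical path

def run_detection(detector):
    """Run one detection pass over the video and return the scene list."""
    video = open_video(VIDEO_PATH)
    scene_manager = SceneManager()
    scene_manager.add_detector(detector)
    scene_manager.detect_scenes(video)
    return scene_manager.get_scene_list()

frame_rate = open_video(VIDEO_PATH).frame_rate  # reported as 20.0 above

# Default min_scene_len vs. an explicit 2-second minimum (2 * frame_rate frames).
scenes_default = run_detection(ContentDetector())
scenes_two_sec = run_detection(ContentDetector(min_scene_len=int(2 * frame_rate)))

print(f"default: {len(scenes_default)} scenes, 2s minimum: {len(scenes_two_sec)} scenes")
```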
Answered by Breakthrough on Dec 31, 2024.
What detector are you using? There is a known issue for AdaptiveDetector in #408.
I understand the problem, thank you very much.
Any time the `min_scene_len` parameter is enforced, the results will differ, since PySceneDetect has to decide what to do with cuts that fall inside that minimum length. In previous versions (v0.6.3 and older), any cut points that occurred before `min_scene_len` were simply ignored.

Newer versions (v0.6.4+) allow you to customize what happens with the `filter_mode` param. By default, `ContentDetector` now merges cuts shorter than `min_scene_len` with the preceding scene. This greatly improves performance on videos with lots of flashing. You can still get the old behaviour by setting the `filter_mode` param of `ContentDetector` to `FlashFilter.Mode.SUPPRESS`, e.g.:
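A minimal sketch of that configuration using the Python API (the import path for `FlashFilter` is an assumption here and may differ between releases, so check your installed version; the video path and 2-second minimum are illustrative):

```python
from scenedetect import open_video, SceneManager
from scenedetect.detectors import ContentDetector
# Assumed import location for FlashFilter; it may be exposed from a different
# module depending on the installed PySceneDetect version.
from scenedetect.scene_detector import FlashFilter

video = open_video("input.mp4")  # hypothetical path

scene_manager = SceneManager()
# SUPPRESS ignores cuts that occur within min_scene_len of the previous cut,
# matching the pre-v0.6.4 behaviour, instead of merging them (the new default).
scene_manager.add_detector(
    ContentDetector(
        min_scene_len=int(2 * video.frame_rate),
        filter_mode=FlashFilter.Mode.SUPPRESS,
    )
)

scene_manager.detect_scenes(video)
for start, end in scene_manager.get_scene_list():
    print(f"{start.get_timecode()} -> {end.get_timecode()}")
```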