
[NOMRG] Try build and tests on ffmpeg6 #8211


Closed · wants to merge 4 commits

Conversation

@NicolasHug (Member) commented Jan 15, 2024

Trying this now that #8096 is merged

@pytorch-bot bot commented Jan 15, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/8211

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (6 Unrelated Failures)

As of commit b33e843 with merge base a00a72b (image):

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@NicolasHug (Member Author)

Dataset tests are segfaulting, probably for the same (unknown) reasons as in #7296. I assume it comes from the video datasets. I can't reproduce the segfault locally, but with ffmpeg6 and pyav installed I'm getting errors like

__________________________________________ UCF101TestCase.test_num_examples ___________________________________________
test/datasets_utils.py:555: in test_num_examples
    with self.create_dataset(config) as (dataset, info):
../.miniconda3/envs/ffmpeg6/lib/python3.10/contextlib.py:135: in __enter__
    return next(self.gen)
test/datasets_utils.py:378: in create_dataset
    info = self._inject_fake_data(tmpdir, complete_config) if inject_fake_data else None
test/datasets_utils.py:482: in _inject_fake_data
    info = self.inject_fake_data(tmpdir, config)
test/test_datasets.py:871: in inject_fake_data
    video_files = self._create_videos(video_folder)
test/test_datasets.py:883: in _create_videos
    video_files = [
test/test_datasets.py:884: in <listcomp>
    datasets_utils.create_video_folder(root, cls, lambda idx: file_name_fn(cls, idx), num_examples_per_class)
test/datasets_utils.py:123: in inner_wrapper
    return fn(*args, **kwargs)
test/datasets_utils.py:940: in create_video_folder
    return [
test/datasets_utils.py:941: in <listcomp>
    create_video_file(root, file_name_fn(idx), size=size(idx) if callable(size) else size, **kwargs)
test/datasets_utils.py:123: in inner_wrapper
    return fn(*args, **kwargs)
test/datasets_utils.py:890: in create_video_file
    torchvision.io.write_video(str(file), video.permute(0, 2, 3, 1), fps, **kwargs)
torchvision/io/video.py:91: in write_video
    stream = container.add_stream(video_codec, rate=fps)
av/container/output.pyx:62: in av.container.output.OutputContainer.add_stream
    ???
av/codec/codec.pyx:179: in av.codec.codec.Codec.__cinit__
    ???
av/codec/codec.pyx:187: in av.codec.codec.Codec._init
    ???
E   av.codec.codec.UnknownCodecError: libx264

When pyav is not installed, all those tests are skipped as per the skip mark.
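One way to detect this failure mode up front would be to probe PyAV for the encoder before running the video dataset tests. A minimal sketch (the helper is hypothetical, not part of torchvision; it relies on PyAV's `av.codec.Codec(name, mode)` constructor and the `UnknownCodecError` seen in the traceback above):

```python
def h264_encoder_available() -> bool:
    """Return True if PyAV can construct an H.264 encoder.

    Hypothetical guard: an FFmpeg build without libx264 raises
    UnknownCodecError exactly as in the traceback above.
    """
    try:
        import av
        from av.codec.codec import UnknownCodecError
    except ImportError:
        return False  # pyav not installed; the tests are skipped anyway
    try:
        av.codec.Codec("libx264", "w")  # mode "w" requests the encoder
    except UnknownCodecError:
        return False
    return True
```

A skip mark conditioned on this check would skip the video dataset tests on ffmpeg builds that lack libx264, instead of failing inside `write_video`.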

@NicolasHug (Member Author)

After a0ff08b, which ignores the dataset tests, there are still segfaults, this time in test_decode_jpeg_error(), which seems completely unrelated 🤷

@NicolasHug (Member Author) commented Jan 17, 2024

Tried to avoid the segfault by only running the video tests. They're all being skipped because the extension cannot be loaded:

try:
    _load_library("video_reader")
    _HAS_VIDEO_OPT = True
except (ImportError, OSError):
    _HAS_VIDEO_OPT = False

Printing the exception gives:

/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /opt/conda/envs/ci/lib/./libx265.so.199)

(torchvision is still built with ffmpeg support)
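A standard way to confirm this kind of mismatch is to list the GLIBCXX symbol versions the system libstdc++ actually exports, e.g. with `strings /lib64/libstdc++.so.6 | grep GLIBCXX`, and check whether `GLIBCXX_3.4.20` appears. A small stdlib-only sketch for sorting that output (the helper and path are illustrative, not part of torchvision):

```python
def glibcxx_versions(strings_output: str) -> list[str]:
    """Extract GLIBCXX symbol versions from `strings libstdc++.so.6` output,
    sorted numerically (a plain lexicographic sort would put 3.4.9 after
    3.4.20). Illustrative diagnostic helper, not part of torchvision.
    """
    versions = set()
    for line in strings_output.splitlines():
        if not line.startswith("GLIBCXX_"):
            continue
        ver = line.split("_", 1)[1]
        # Keep only numeric versions, skipping e.g. GLIBCXX_DEBUG_MESSAGE_LENGTH
        if all(part.isdigit() for part in ver.split(".")):
            versions.add(line)
    return sorted(versions, key=lambda s: tuple(int(p) for p in s.split("_", 1)[1].split(".")))
```

If `GLIBCXX_3.4.20` is missing from the result, the system libstdc++ predates what the conda-provided `libx265.so.199` was compiled against, which matches the loader error above; linking against the conda environment's own libstdc++ would be one way around it.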

@NicolasHug (Member Author)

Won't have time to dig into this further for now.
