Padding transform #133

Merged: 12 commits merged into asteroid-team:master on Apr 20, 2022
Conversation

shahules786 (Contributor)

Added padding transform #91
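
For context, a minimal usage sketch of the new transform, using the parameters exercised by the tests later in this thread (min_fraction, max_fraction, p, output_type); the top-level import path is an assumption:

    import torch
    from torch_audiomentations import Padding  # import path assumed

    samples = torch.rand(size=(2, 2, 32000), dtype=torch.float32)  # (batch, channels, time)

    # Pad (zero out) a random trailing fraction of each example, between 20% and 50%,
    # applied with probability 0.5; the output shape matches the input shape.
    augment = Padding(min_fraction=0.2, max_fraction=0.5, p=0.5, output_type="dict")
    padded = augment(samples).samples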

iver56 (Collaborator) commented Apr 11, 2022

Thanks for the PR and for your patience :) Reviewing this is still on my TODO list.

iver56 self-requested a review on April 11, 2022, 10:34
iver56 (Collaborator) commented Apr 11, 2022

I see two test failures on my end (when running on a CUDA-capable machine). Only the first of them is related to this PR:

______________________________________________________________ TestPadding.test_Padding_cuda _______________________________________________________________

self = <tests.test_padding.TestPadding testMethod=test_Padding_cuda>

    @pytest.mark.skipif(not torch.cuda.is_available(), reason="Requires CUDA")
    def test_Padding_cuda(self):

        audio_samples = torch.rand(
            size=(2, 2, 32000), dtype=torch.float32, device=torch.device("cuda")
        )
        augment = Padding(min_fraction=0.2, max_fraction=0.5, p=0.5, output_type="dict")
        padded_samples = augment(audio_samples).samples

        self.assertEqual(audio_samples.shape, padded_samples.shape)
>       assert_almost_equal(padded_samples[..., -6400:].numpy(), np.zeros((2, 2, 6400)))
E       TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

tests\test_padding.py:134: TypeError
_________________________________________________________ TestRandomCrop.test_crop_on_device_cuda __________________________________________________________

self = <tests.test_random_crop.TestRandomCrop testMethod=test_crop_on_device_cuda>

    @pytest.mark.skipif(not torch.cuda.is_available(), reason="Requires CUDA")
    def test_crop_on_device_cuda(self):

        samples = torch.rand(
            size=(8, 2, 32000), dtype=torch.float32, device=torch.device("cuda")
        )
        sampling_rate = 16000
        crop_to = 1.5
        desired_samples_len = sampling_rate * crop_to
>       Crop = RandomCrop(
            max_length=crop_to, sampling_rate=sampling_rate, output_type="dict"
        )
E       TypeError: __init__() got an unexpected keyword argument 'output_type'

tests\test_random_crop.py:38: TypeError
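
For reference, the first failure is the test asserting on a CUDA tensor directly; one possible fix is to copy the tensor to host memory before converting it to NumPy. A minimal sketch (an assumption about the fix, not necessarily the exact change made in this PR):

    # Copy the CUDA tensor to the CPU before calling .numpy()
    assert_almost_equal(
        padded_samples[..., -6400:].cpu().numpy(), np.zeros((2, 2, 6400))
    )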

iver56 (Collaborator) left a comment


I haven't listened to the outputs yet. I promise to do that later though 😄

Review comment on tests/test_padding.py (outdated, resolved)
shahules786 (Contributor, Author)

Hi @iver56, I have updated the PR.

iver56 (Collaborator) commented Apr 13, 2022

But the CUDA test is still broken. Should I fix it?

shahules786 (Contributor, Author) commented Apr 14, 2022

@iver56 Oops, my bad. It should be fixed now. Since I don't have a GPU on my local machine, the CUDA tests get skipped every time.

shahules786 requested a review from iver56 on April 14, 2022, 14:45
iver56 (Collaborator) commented Apr 15, 2022

Ah, I see. Thanks for the fix 👍

I'm on vacation these days, so responses will be slow until at least Tuesday. Thanks for your patience :)

iver56 (Collaborator) left a comment


LGTM! It only does zero padding for now, so we could add support for wrap and reflect modes later.

Thanks for the contribution 😄
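
A rough sketch of how wrap and reflect padding could look, using the padding modes of torch.nn.functional.pad ("circular" corresponds to wrap); the variable names are hypothetical and this is not part of the merged code:

    import torch
    import torch.nn.functional as F

    x = torch.rand(2, 2, 32000)  # (batch, channels, time)
    pad_length = 6400

    # Trim the tail and pad it back with different modes
    zero_padded = F.pad(x[..., :-pad_length], (0, pad_length), mode="constant", value=0.0)
    wrap_padded = F.pad(x[..., :-pad_length], (0, pad_length), mode="circular")
    reflect_padded = F.pad(x[..., :-pad_length], (0, pad_length), mode="reflect")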

iver56 merged commit df57948 into asteroid-team:master on Apr 20, 2022