
About using TLC in super resolution #17

Open
CR7forMadrid opened this issue Mar 30, 2023 · 2 comments

Comments

@CR7forMadrid

Hello, I had a lot of questions when I applied your work to the super-resolution model, such as Local_Base and so on. If possible, could you give a concrete example to show us how to apply it?

@CR7forMadrid (Author)

The main problem is how to set base_size. For example, if I set gt_size=192 for 4x upsampling, what should base_size be?

@achusky (Collaborator)

achusky commented Apr 2, 2023

Hi, thank you for your interest.

To set the hyper-parameters of TLC, you can choose a base size that is 1x to 2x (default is 1.5x) of the training size (the input size during training). For instance, if you use gt_size=192 for 4x SR, the training input size is 192/4=48. Then you can use train_size=(1, 3, 48, 48) and base_size=(72, 72), which means the base size is 1.5x of the training size. Furthermore, you can adjust this parameter (e.g., 1x, 1.5x, 2x) based on your own validation data.
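The arithmetic above can be sketched as a small helper. Note that `tlc_base_size` is our own hypothetical function for illustration, not part of the TLC repository; only the rule it encodes (training input size = gt_size / scale, base size = 1x to 2x of that, default 1.5x) comes from the comment above.

```python
# Hypothetical helper illustrating the rule described above.
# The names tlc_base_size / ratio are ours, not from the TLC code base.
def tlc_base_size(gt_size, scale, ratio=1.5):
    """Return (train_size, base_size) for TLC.

    gt_size: ground-truth (HR) patch size used during training
    scale:   SR upsampling factor
    ratio:   base/train ratio, typically in [1.0, 2.0] (default 1.5)
    """
    train_hw = gt_size // scale                  # input (LR) patch size during training
    base_hw = int(train_hw * ratio)              # base size is ratio-x the training size
    train_size = (1, 3, train_hw, train_hw)      # (N, C, H, W), as in the TLC configs
    base_size = (base_hw, base_hw)
    return train_size, base_size

# The example from the thread: gt_size=192 with 4x SR
train_size, base_size = tlc_base_size(192, 4)
# → train_size = (1, 3, 48, 48), base_size = (72, 72)
```

As the comment notes, `ratio` is the knob to tune on your own validation data (e.g. try 1.0, 1.5, and 2.0 and keep whichever validates best).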

If you want to apply TLC to your own work, you can refer to the following code:
Apply TLC to HINet (w/ Instance Normalization) and MPRNet (w/ SE module)
Apply TLC to Restormer (w/ transposed self-attention)
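The common thread in those integrations is TLC's core idea: a statistic that was computed globally at training time (e.g. a global average pool inside an SE module) is replaced at test time by a local windowed mean, with the window size tied to train_size/base_size. Below is a minimal NumPy sketch of such a local mean for a single 2-D feature map, using an integral image for O(1) window sums; this is our own simplified illustration (windows near the border are clipped to stay inside the image), not the actual TLC implementation.

```python
import numpy as np

def local_avg_pool(x, kernel):
    """Windowed mean per spatial position: a test-time stand-in for a
    global mean. x is an (H, W) feature map; kernel is (kh, kw), e.g.
    derived from TLC's base_size. Windows are clipped at the borders.
    Illustrative sketch only, not the TLC repository's code."""
    H, W = x.shape
    kh, kw = min(kernel[0], H), min(kernel[1], W)
    # Integral image with a zero border, so any window sum costs 4 lookups.
    s = np.zeros((H + 1, W + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(x, axis=0), axis=1)
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            # Center the window on (i, j), shifted inward at the borders.
            t = max(0, min(i - kh // 2, H - kh))
            l = max(0, min(j - kw // 2, W - kw))
            b, r = t + kh, l + kw
            out[i, j] = (s[b, r] - s[t, r] - s[b, l] + s[t, l]) / (kh * kw)
    return out
```

When the kernel is at least as large as the feature map, every output equals the global mean, which is why statistics match training behavior on small inputs while becoming local (and resolution-robust) on large test images.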
