Hi Dr. Afifi, thanks for your great work!
I have read many of your inspiring papers about color constancy. In my opinion, most of your papers are elegant and easy to understand, except for 'C5', which really confused me.
I am not sure whether I have misunderstood related concepts such as transfer learning or transductive learning; I simply could not follow the core training and testing procedure of your method.
I have three questions below:
From your abstract, it seems that you use the additional unlabeled images only at test time:
C5 approaches this problem through the lens of transductive inference: additional unlabeled images are provided as input to the model at test time
However, you also highlight that even during training, the model is trained on both labeled and unlabeled images:
Our system is trained using labeled (and unlabeled) images from multiple cameras, but at test time our model is able to look at a set of the (unlabeled) test set images from a new camera.
I am not sure I understand the meaning of the query image and the additional images in your training and testing procedures.
For instance, suppose you train on sensor1's images and test on sensor2's:
Training
The query image is a labeled image from sensor1, and the additional images are selected from sensor1 without labels. Am I right?
Testing
The query image is a labeled image from sensor2, and the additional images are selected from sensor2 without labels; essentially the same as the training process except for the dataset. Am I right?
If so, the paper's claim of not using any labels would seem to bend the truth:
In contrast, our technique requires no ground-truth labels for the unseen camera, and is essentially calibration-free for this new sensor.
If not, I am confused about how the model's parameters could be updated without labels during testing.
The leave-one-out evaluation approach
For the camera-specific setting, it is clear that leave-one-out means using n-1 images for training and the remaining image for testing, looping n times. However, I do not understand what the paper, which focuses on the cross-sensor setting, means here:
we adopt a leave-one-out cross-validation evaluation approach: for each dataset, we exclude all scenes and cameras used by the test set from our training images. For a fair comparison with FFCC [13], we trained FFCC using the same leave-one-out cross-validation evaluation approach.
Could you describe the leave-one-out method in detail? For instance, how did you re-train FFCC using it?
The three questions above may be connected to each other.
I would be very grateful for your reply!
Thanks for your interest in the work and your questions.
1. "From your abstract, it seems that you only use the additional unlabeled images at test time." Yes, the abstract mentions that "C5 approaches this problem through the lens of transductive inference: additional unlabeled images are provided as input to the model at test time," but we never said "only." That is, we use additional unlabeled images at training time as well.
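To make the test-time behavior concrete, here is a minimal, hypothetical sketch (PyTorch-style; `ToyC5` is a toy stand-in, not the real C5 architecture): the frozen model consumes an unlabeled query image together with unlabeled additional images from the same camera and outputs an illuminant estimate, with no labels and no parameter updates involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyC5(nn.Module):
    """Toy stand-in for C5 (NOT the real architecture): it pools features from
    the unlabeled additional images and combines them with the query image's
    features to predict a unit-norm RGB illuminant."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 32), nn.ReLU())
        self.head = nn.Linear(64, 3)

    def forward(self, query, additional):
        q = self.encoder(query)                    # (B, 32) query features
        b, n = additional.shape[:2]
        a = self.encoder(additional.flatten(0, 1)) # (B*N, 32) per additional image
        a = a.view(b, n, -1).mean(dim=1)           # (B, 32) pooled over the set
        return F.normalize(self.head(torch.cat([q, a], dim=-1)), dim=-1)

model = ToyC5()

# Test time: everything is unlabeled and the weights stay frozen.
query = torch.rand(1, 3, 64, 64)          # one test image from the unseen camera
additional = torch.rand(1, 7, 3, 64, 64)  # unlabeled images from the SAME camera

with torch.no_grad():                      # no gradients, no parameter updates
    illuminant = model(query, additional)  # (1, 3) RGB illuminant estimate
```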
2. "I am not sure I understand the meaning of the query image and the additional images in your training and testing procedures." The query image is the current test image; the additional images are a set of extra unlabeled images captured by the same sensor, which can be randomly selected or predefined. At test time, both the query and the additional images are unlabeled, while at training time, in order to compute the loss function, we use a labeled query image together with unlabeled additional images.
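A matching sketch of one training step, continuing from the `ToyC5` toy code above: the ground-truth illuminant belongs to the query image only, while the additional images from the same camera enter unlabeled. I use the standard angular-error loss here as an assumption for illustration; the paper's actual objective may differ in its details.

```python
# One hypothetical training step (reuses torch, F, and model from the sketch above).
def angular_error(pred, gt):
    """Mean angular error (radians) between predicted and ground-truth
    illuminants -- the standard color-constancy training loss."""
    cos = F.cosine_similarity(pred, gt, dim=-1).clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos).mean()

opt = torch.optim.Adam(model.parameters(), lr=1e-4)

query = torch.rand(8, 3, 64, 64)            # labeled query images
gt = F.normalize(torch.rand(8, 3), dim=-1)  # ground truth for the QUERIES only
additional = torch.rand(8, 7, 3, 64, 64)    # unlabeled, same camera as each query

opt.zero_grad()
loss = angular_error(model(query, additional), gt)  # loss touches query labels only
loss.backward()
opt.step()
```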
3. "The leave-one-out evaluation approach." It is leave-one-out over cameras/datasets, not over test images. That is, we use n-1 camera sets (or datasets) for training and the remaining one for testing.
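In pseudocode, the protocol looks something like the sketch below; the camera names are illustrative and `train_on`/`evaluate` are hypothetical helpers. FFCC was re-trained with exactly the same splits for the comparison.

```python
# Leave-one-out over CAMERAS, not images: hold out one camera's entire set,
# train on all the others, and repeat for every camera.
cameras = ["Canon1D", "Canon600D", "FujifilmXM1", "NikonD5200", "SonyA57"]

for held_out in cameras:
    train_cams = [c for c in cameras if c != held_out]
    # model = train_on(train_cams)       # hypothetical: train C5 (or FFCC) on n-1 cameras
    # stats = evaluate(model, held_out)  # hypothetical: test on the held-out camera
    print(f"train on {train_cams}, test on {held_out}")
```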