RGB-D Washington and iCubWorld28 features #1

Open
raffaello-camoriano opened this issue Mar 3, 2017 · 6 comments

@raffaello-camoriano
Member

If there is enough interest, RGB-D Washington and iCubWorld28 features can be added.

@GiuliaP

@paulojunqueira

Hello, I am new to this area and I am studying your paper; it is very interesting! Is there a way to test other datasets with the MATLAB code?
Thanks

@GiuliaP
Collaborator

GiuliaP commented Jun 20, 2019

Hello @paulojunqueira

As you have probably noticed, the code was released only for the MNIST dataset, but nothing prevents you from adapting it to the iCW or RGB-D Washington datasets (i.e., the ones used in the paper), or even to other datasets.

I (and maybe also @raffaello-camoriano) can try to provide some support for issues that may arise, in case you are interested in adding them and making a contribution.

Giulia

@paulojunqueira

Hello @GiuliaP
Thank you, I appreciate it. There is a MATLAB file in the dataset folder called MNIST.m, but its function is still unclear to me. Is it meant to prepare the MNIST dataset for training? Could I adapt it to prepare another dataset?

Could you give me some guidance on which part or function I should start adapting to use with another dataset?

thank you.

Paulo

@GiuliaP
Collaborator

GiuliaP commented Jun 25, 2019

@paulojunqueira yes, the code is structured around a generic dataset class, from which the specific MNIST class inherits. You can try to follow this logic, i.e., "clone" the MNIST class and customize it to your own data. I think @raffaello-camoriano can confirm this.
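To sketch the idea, something along these lines could work. Note that the base class name, constructor, property names, and file name below are only illustrative placeholders, not the actual API of this repo; MNIST.m is the working example to mirror.

```matlab
% Placeholder sketch: 'Dataset' stands for the generic dataset class in
% this repo, and the property and file names are hypothetical.
classdef MyDataset < Dataset
    methods
        function obj = MyDataset()
            obj = obj@Dataset();        % call the generic class constructor
            s = load('mydataset.mat');  % hypothetical data file
            obj.X = s.X;                % feature matrix (assumed property)
            obj.Y = s.Y;                % label vector (assumed property)
        end
    end
end
```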

@paulojunqueira

paulojunqueira commented Jun 25, 2019

Thanks @GiuliaP. I have one more doubt: in the dataConf_MNIST_inc.m file, some variables are used, like ntr, nte, nLow... and there are no comments in the code. Is ntr the number of training samples for all classes? What about nLow? I am trying to identify which variables correspond to the ones you cite in your paper, like nbal, nimb and ntest.

Thank you

@raffaello-camoriano
Member Author


Hi @paulojunqueira and thank you for your interest.

Here are some comments on the variables in dataConf_MNIST_inc.m, which is then used to produce the dataset in main.m (line 103):

  • ntr: total number of training samples
  • nte: total number of test samples (if empty, the maximum number of available test samples is used, such that the test set is balanced)
  • nLow: number of samples of the underrepresented class.

The proportion of samples from the underrepresented class can also be controlled by commenting out nLow and setting the relative frequency factor lowFreq instead, depending on your needs.
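For illustration, a configuration using these variables might look like this (the values are arbitrary examples, and the actual defaults in dataConf_MNIST_inc.m may differ):

```matlab
% Illustrative values only; see dataConf_MNIST_inc.m for the real defaults.
ntr  = 60000;  % total number of training samples
nte  = [];     % total number of test samples (empty: max available, balanced)
nLow = 100;    % number of samples of the underrepresented class

% Alternatively, comment out nLow and set the relative frequency instead:
% lowFreq = 0.01;  % hypothetical fraction of samples for the rare class
```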
