
Would like to ask: where can I download data to train / test pNLP-Mixer? #1

Open
tiendung opened this issue Feb 27, 2022 · 7 comments

@tiendung

I couldn't find any pointers on where to get the datasets, so I'm asking here. This is not an issue with the implementation itself.

@tonyswoo
Contributor

Hello,

The three datasets I used to evaluate my implementation are the MTOP dataset, the multilingual ATIS dataset, and the IMDB dataset.

You can download the MTOP dataset here. The IMDB dataset can also be downloaded easily here. As for the multilingual ATIS dataset, getting access is a bit more involved: you need to create an LDC account, request the dataset, and wait for the request to be approved (this may be a manual process). The multilingual ATIS catalogue page is here.
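
For the IMDB dataset specifically, here is a minimal download sketch (assuming the standard aclImdb v1 tarball from the Stanford sentiment page; the URL and target directory are my assumptions, so adjust them to match your setup):

import tarfile
import urllib.request

# Assumed location of the aclImdb v1 tarball (Stanford sentiment page).
URL = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"

# Download the archive and extract it under ./data/.
urllib.request.urlretrieve(URL, "aclImdb_v1.tar.gz")
with tarfile.open("aclImdb_v1.tar.gz", "r:gz") as tar:
    tar.extractall("./data")  # creates ./data/aclImdb/{train, test, ...}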

If you have further questions, feel free to add a comment.

@tiendung
Author

tiendung commented Feb 28, 2022

I have tried several times but still cannot figure out how to run the training script on the IMDB dataset.
I get the following error:

t@medu pnlp-mixer % python3 run.py -c cfg/imdb_xs.yml -n imdb_xs -m train
  File "/Users/t/repos/pnlp-mixer/run.py", line 167, in <module>
    data_module = PnlpMixerDataModule(cfg.vocab, train_cfg, model_cfg.projection)
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 20, in __init__
    self.tokenizer = BertWordPieceTokenizer(**vocab_cfg.tokenizer)
  File "/usr/local/lib/python3.9/site-packages/tokenizers/implementations/bert_wordpiece.py", line 30, in __init__
    tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(unk_token)))
Exception: Error while initializing WordPiece: No such file or directory (os error 2)

I downloaded the IMDB dataset and put it at ./data/imdb:

t@medu pnlp-mixer % ll ./data/imdb
.rw-r--r-- t staff 826 KB Wed Apr 13 00:14:11 2011  imdb.vocab
.rw-r--r-- t staff 882 KB Sun Jun 12 05:54:43 2011  imdbEr.txt
.rw-r--r-- t staff 3.9 KB Sun Jun 26 07:18:03 2011  README
drwxr-xr-x t staff 224 B  Wed Apr 13 00:22:40 2011  test/
drwxr-xr-x t staff 320 B  Sun Jun 26 08:09:11 2011  train/

Can you give some hints?

@tiendung reopened this Feb 28, 2022
@tonyswoo
Contributor

Hi,

Could you show me the configuration file (the .yml file) you are using?

I believe the issue is that the vocab file does not exist at the path provided in vocab.tokenizer.vocab of your configuration file.

If you wish to use the multilingual BERT vocabulary, the file is included in the repo at ./wordpiece/mbert_vocab.txt
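
For reference, the relevant part of the configuration would look something like this (a minimal sketch based only on the vocab.tokenizer.vocab key mentioned above; the rest of imdb_xs.yml stays unchanged):

vocab:
  tokenizer:
    vocab: ./wordpiece/mbert_vocab.txt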

@tiendung
Author

You are right. I needed to change the config to point at mbert_vocab.txt.

@tiendung
Author

Sorry to bother you again. Now I'm stuck at AttributeError: 'BertWordPieceTokenizer' object has no attribute 'pre_tokenizer'. I guess it is related to the tokenizer? My config file is https://github.com/telexyz/pnlp-mixer/blob/master/cfg/imdb_xs.yml

  File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 87, in __getitem__
    words = self.get_words(fields)
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 109, in get_words
    return [w[0] for w in self.tokenizer.pre_tokenizer.pre_tokenize_str(self.normalize(fields[0]))][:self.max_seq_len]
AttributeError: 'BertWordPieceTokenizer' object has no attribute 'pre_tokenizer'

@tiendung reopened this Feb 28, 2022
@tonyswoo
Contributor

tonyswoo commented Mar 2, 2022

Hi,

Which version of tokenizers are you using?
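
You can print the installed version with a quick one-liner:

python3 -c "import tokenizers; print(tokenizers.__version__)"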

@zzk0

zzk0 commented Nov 3, 2022

> Now I'm stuck at AttributeError: 'BertWordPieceTokenizer' object has no attribute 'pre_tokenizer'.

I had the same problem; the command below fixed it:

pip install tokenizers==0.11.4
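
After pinning the version, a quick smoke test (a sketch; the vocab path assumes the mBERT vocab included in this repo) should run without the AttributeError:

from tokenizers import BertWordPieceTokenizer

# The same wrapper dataset.py uses, pointed at the repo's mBERT vocab.
tok = BertWordPieceTokenizer("./wordpiece/mbert_vocab.txt")

# dataset.py calls tok.pre_tokenizer.pre_tokenize_str(...); with
# tokenizers 0.11.4 this attribute is available on the wrapper.
print(tok.pre_tokenizer.pre_tokenize_str("Hello world!"))
# expected: [('Hello', (0, 5)), ('world', (6, 11)), ('!', (11, 12))]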
