
get_semantic_embed error #12

Open
lightningsoon opened this issue Sep 20, 2024 · 1 comment

Comments

@lightningsoon

python get_semantic_embed.py --model_path ./Llama-2-7b-hf --dataset BookCrossing --pooling average --gpu_id 1
miniconda3/envs/rella/lib/python3.10/site-packages/transformers/configuration_utils.py:902 in dict_torch_dtype_to_str

    899 │         string, which can then be stored in the json format.
    900 │         """
    901 │         if d.get("torch_dtype", None) is not None and not isinstance(d["torch_dtype"], s
  ❱ 902 │             d["torch_dtype"] = str(d["torch_dtype"]).split(".")[1]
    903 │         for value in d.values():
    904 │             if isinstance(value, dict):
    905 │                 self.dict_torch_dtype_to_str(value)

I have two questions:
1. Which embedding model should be used?
2. The model fails as soon as it is loaded; this looks like a version issue.
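
For the suspected version issue, a quick environment check may help narrow it down. The snippet below is only a minimal sketch (the exact torch/transformers versions required by this repo are not stated in the thread); it prints the installed versions and repeats the dtype-to-string conversion that fails at configuration_utils.py:902.

```python
# Minimal environment check (assumption: the failure comes from a torch/transformers
# version mismatch; the versions pinned by this repo are not given in this thread).
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)

# The failing line (configuration_utils.py:902) assumes a torch dtype prints as
# "torch.<dtype>", so split(".")[1] should return e.g. "float16".
print(str(torch.float16).split(".")[1])  # expected: "float16"
```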

@LaVieEnRose365
Owner

In our paper we used Vicuna-13b-v1.3. As for the second question, we have not seen this error before. Could you point out which line of our code triggers it?
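
For reference on the first question, below is a rough sketch of average-pooled embeddings with a Hugging Face LLaMA-family model. It is only an illustration under the assumption that get_semantic_embed.py with `--pooling average` mean-pools the last hidden states over non-padding tokens; the actual script may differ.

```python
# Hedged sketch of average-pooled text embeddings (not the repo's actual code).
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "lmsys/vicuna-13b-v1.3"  # model used in the paper, per the reply above
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA-family tokenizers ship without a pad token

model = AutoModel.from_pretrained(model_path, torch_dtype=torch.float16)
model.eval()

texts = ["A description of a book from BookCrossing."]  # hypothetical input
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state            # (batch, seq_len, hidden_dim)

mask = batch["attention_mask"].unsqueeze(-1).to(hidden.dtype)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # average over real tokens
print(embeddings.shape)
```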
