
Commit 0a91a41

1 parent 7972438 commit 0a91a41

File tree: 4 files changed, +53 -0 lines changed

README.md (+3)
@@ -145,6 +145,9 @@
 * [Label Studio](./docs/ai/label-studio.md)
 * [PyTorch](./docs/ai/pytorch.md)
 * [Whisper](./docs/ai/whisper.md)
+* [AmyGLM](./docs/ai/amy-glm.md)
+* [IP_LAP](./docs/ai/IP_LAP.md)
+* [Sad Talker](./docs/ai/sad-talker.md)
 
 ## ESP8266

docs/ai/IP_LAP.md (+33)
@@ -0,0 +1,33 @@
# IP_LAP

Source code: https://github.com/Weizhi-Zhong/IP_LAP

## Install dependencies
```
conda create -n iplap python=3.7.13
conda activate iplap
pip install torch torchvision torchaudio  # if this does not work, check the install command at https://pytorch.org/
pip install face-alignment==1.3.4
pip install -r requirements.txt
conda install ffmpeg
```
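A quick optional check that the PyTorch install succeeded (a minimal sketch, not part of the original instructions; the import names come from the packages installed above):

```
python -c "import torch, face_alignment; print(torch.__version__, torch.cuda.is_available())"
```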
Download the pre-trained models from [OneDrive](https://onedrive.live.com/?id=625AA3DEDF6AE6A%21187017&resid=625AA3DEDF6AE6A%21187017&ithint=folder&authkey=%21ACAA8wggva04ZKU&cid=0625aa3dedf6ae6a) or [jianguoyun](https://www.jianguoyun.com/p/DeXpK34QgZ-EChjI9YcFIAA), and place them in the folder test/checkpoints. Then run the inference command.

Pre-trained model backups:

```
local_oss/IP_LAP/renderer_checkpoint.pth
local_oss/IP_LAP/landmarkgenerator_checkpoint.pth
```
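A sketch of placing the downloaded weights, assuming they landed in ~/Downloads (the download path is an assumption; the checkpoint file names match the backups listed above):

```
# adjust ~/Downloads to wherever the two .pth files were saved
mkdir -p test/checkpoints
cp ~/Downloads/landmarkgenerator_checkpoint.pth ~/Downloads/renderer_checkpoint.pth test/checkpoints/
```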
The evaluation code is similar to [this repo](https://github.com/dc3ea9f/vico_challenge_baseline/tree/a282472ea99a1983ca2ce194665a51c2634a1416/evaluations).

## Inference
```
CUDA_VISIBLE_DEVICES=0 python inference_single.py  # test run

python inference_single.py --input './test/template_video/129.mp4' --audio './upload/t1.m4a' --landmark_gen_checkpoint_path './test/checkpoints/landmarkgenerator_checkpoint.pth'
```

docs/ai/amy-glm.md (+3)
@@ -0,0 +1,3 @@
# AmyGLM

```
pip install openai flask transformers torch torchvision torchaudio datasets accelerate librosa soundfile sentencepiece pydub
```
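A quick sanity check that the environment resolved correctly (a minimal sketch, not from the original note; the import names correspond to the packages installed above):

```
python -c "import openai, flask, transformers, torch, librosa, soundfile, pydub; print(transformers.__version__, torch.__version__)"
```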

docs/ai/sad-talker.md (+14)
@@ -0,0 +1,14 @@
# Sad Talker

```
conda create -n sadtalker python=3.8
conda activate sadtalker
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
conda install ffmpeg
pip install -r requirements.txt
pip install TTS
bash scripts/download_models.sh
```
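The note stops after downloading the models; a hedged inference sketch based on the upstream SadTalker repository (the script name, flags, and placeholder paths are assumptions, not taken from this note):

```
# replace the audio/image paths with your own files
python inference.py --driven_audio ./my_audio.wav --source_image ./my_face.png --result_dir ./results
```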
