This code only demonstrates loop-driven dual digital humans on the same screen, combined with a post-processing super-resolution framework. It uses Wav2Lip in a loop to drive the two digital humans. If you train a high-definition model, perform streaming processing, use stronger hardware, and skip post-processing, it can be used for real-time live streaming of the two digital humans. Within this framework, Wav2Lip can also be replaced with other digital-human driving projects. Videos 1-6 are pre-recorded silent videos of the two people, and audios 1-6 are pre-prepared speech audio.
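The loop described above can be sketched as a small driver that builds one Wav2Lip inference command per video/audio pair (the `--checkpoint_path`, `--face`, `--audio`, and `--outfile` flags come from the Wav2Lip repository's `inference.py`; the directory layout and checkpoint name here are assumptions to adjust for your setup):

```python
from pathlib import Path

# Assumed layout -- adjust to your checkout and materials.
WAV2LIP_DIR = Path("Wav2Lip")
CHECKPOINT = WAV2LIP_DIR / "checkpoints" / "wav2lip_gan.pth"

def wav2lip_command(index: int,
                    video_dir: str = "temp_video",
                    audio_dir: str = "input_audio",
                    out_dir: str = "results") -> list:
    """Build the Wav2Lip inference command for pair `index` (1-6).

    Segments for the two people alternate, so running 1..6 in order
    puts the two digital humans on screen in turn.
    """
    return [
        "python", str(WAV2LIP_DIR / "inference.py"),
        "--checkpoint_path", str(CHECKPOINT),
        "--face", str(Path(video_dir) / ("%d.mp4" % index)),
        "--audio", str(Path(audio_dir) / ("%d.wav" % index)),
        "--outfile", str(Path(out_dir) / ("segment_%d.mp4" % index)),
    ]

# One command per pre-cut pair; a real driver would run each with
# subprocess.run(cmd, check=True) and then concatenate or stream the
# resulting segments.
commands = [wav2lip_command(i) for i in range(1, 7)]
```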
- Upload the pre-recorded dual-person video materials and the audio files in advance to `temp_video` and `input_audio` respectively.
- Recording the dual-digital-human source material is crucial, as is cutting the audio and video.
- Video processing speed depends directly on GPU performance. For live streaming, streaming processing and a high-performance GPU are required.
- CodeFormer is slow at super-resolution; if you do not have high quality requirements for the digital-human video, you can skip super-resolution or switch to GFPGAN.
- Variables are not filled in in the code; you need to set them yourself.
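For the optional post-processing step, a per-segment CodeFormer command can be sketched as below. The flags (`-w` fidelity weight, `--input_path`, `--bg_upsampler`, `--face_upsample`) follow the CodeFormer README, but verify them against the version you cloned; the directory name is an assumption.

```python
from pathlib import Path

def codeformer_command(input_video: str,
                       fidelity_weight: float = 0.7,
                       codeformer_dir: str = "CodeFormer") -> list:
    """Build a CodeFormer face-restoration command for one segment.

    `-w` trades restoration quality against fidelity to the input face;
    `--bg_upsampler realesrgan` also upscales the background. Skip this
    step entirely (or swap in GFPGAN) if speed matters more than quality.
    """
    return [
        "python", str(Path(codeformer_dir) / "inference_codeformer.py"),
        "-w", str(fidelity_weight),
        "--input_path", input_video,
        "--bg_upsampler", "realesrgan",
        "--face_upsample",
    ]
```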
https://github.com/sczhou/CodeFormer
https://github.com/Rudrabha/Wav2Lip