Online AI Gallery is a showcase of images that have been reshaped and edited through multiple artificial intelligence frameworks to generate striking artifacts. You can use this project, which is a compilation of multiple AI frameworks for image processing, and adapt, extend, overwrite, and customize anything to your needs.
- Download around 300 fake images using the This Person Does Not Exist API.
- Find faces that might be duplicated.
- Face recognition.
- Composite all faces to generate one average face.
- Build a 3D model of the average face.
- Animate the average face so that it moves and talks according to a recorded video or in real time.
- Clone a voice to use in our recorded video for presentation purposes.
- Build different layouts for the presentation.
⭐ Star us on GitHub — it motivates us a lot!
- Online_AI_Gallery
- Table of Contents
- Installation
- Images
- Voice
- Animate
- Gallery
- Troubleshooting
- License
- Want to Contribute?
- Team
First clone this repo, then follow each section's requirements to get your results.
git clone https://github.com/karmelyoei/Online_AI_Gallery.git
This section is about manipulating images: extracting the faces from an image, removing any duplicate images, face recognition, combining all faces to form one average face, and animating this average face in real time or in a recorded video.
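The duplicate-removal step can be illustrated with perceptual hashing. The following is a minimal sketch assuming the third-party imagehash package and the src/data/fake_images folder created below; it is not the repository's actual script:

```python
# Hypothetical duplicate check via perceptual hashing (not the repo's script).
# Images whose average-hash matches an earlier image are flagged as duplicates.
from pathlib import Path

import imagehash
from PIL import Image

seen = {}
for path in sorted(Path("src/data/fake_images").glob("*.jpg")):
    h = imagehash.average_hash(Image.open(path))
    if h in seen:
        print(f"{path.name} looks like a duplicate of {seen[h].name}")
    else:
        seen[h] = path
```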
The composite face will be created by processing a series of photos. To obtain these photos, we utilize the This Person Does Not Exist API to retrieve a large number of bogus images. Take the following actions:
- Create a folder named fake_images inside src/data:
mkdir -p src/data/fake_images
- Download 300 images from the website by running getFakeFaces.py; you can customize the number of images instead of 300.
python src/getFakeFaces.py src/data/fake_images 300
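For reference, here is a minimal sketch of what such a download script could look like. The output file names, the User-Agent header, and the one-second pause are our assumptions, not necessarily what getFakeFaces.py does:

```python
# Minimal sketch of fetching fake faces; getFakeFaces.py may differ in details.
import sys
import time
from pathlib import Path

import requests

URL = "https://thispersondoesnotexist.com"  # serves a new random face per request

def download_fake_faces(out_dir: str, count: int) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        resp = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
        resp.raise_for_status()
        (out / f"face_{i:04d}.jpg").write_bytes(resp.content)
        time.sleep(1)  # small pause so each request returns a fresh face

if __name__ == "__main__":
    download_fake_faces(sys.argv[1], int(sys.argv[2]))
```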
The lip-sync library works well if the person's mouth is open without showing teeth, so we built an algorithm to detect whether the person in the picture has an open mouth; teeth detection is not 100% accurate and is still in the works (a sketch of the idea follows the steps below).
Take the following actions:
- Create a folder named openMouth inside the src directory:
mkdir src/openMouth
- Create a folder named openMouthButTeethNotShowing inside the src directory:
mkdir src/openMouthButTeethNotShowing
- Run openMouth.py:
cd src && python openMouth.py
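The core of the check can be expressed with dlib's 68-point landmarks. This is an illustrative sketch, not the exact openMouth.py code: the threshold value is arbitrary, and it assumes the shape predictor file downloaded in the composite-face section below:

```python
# Illustrative open-mouth test using dlib's 68 landmarks; openMouth.py may differ.
# In the 68-point model, points 62 and 66 are the top and bottom inner lips.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(
    "detectors/dots_detector/shape_predictor_68_face_landmarks.dat"
)

def mouth_is_open(image_path: str, threshold: float = 0.05) -> bool:
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        gap = pts.part(66).y - pts.part(62).y  # vertical inner-lip gap
        face_height = face.bottom() - face.top()
        return gap / face_height > threshold  # scale-invariant openness check
    return False  # no face found
```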
Because the This Person Does Not Exist API pictures have a high resolution, we require low-resolution photos: the lip-sync library works well with low-resolution images. Using the PIL library, we can reduce image resolution by reducing the number of pixels in an image without changing the dimensions or other aspects of the image; the code can be found in the file imageResolution.py.
Note: please remember to follow the steps in the correct order.
python imageResolution.py
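The idea is the classic downscale-then-upscale trick; here is a minimal sketch (the factor of 4 is an arbitrary illustration, not necessarily what imageResolution.py uses):

```python
# Sketch of lowering effective resolution while keeping dimensions:
# shrink the image, then resize it back up, losing fine detail.
from pathlib import Path

from PIL import Image

def reduce_resolution(path: Path, factor: int = 4) -> None:
    img = Image.open(path)
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.BILINEAR)
    small.resize((w, h), Image.BILINEAR).save(path)

for f in Path("src/data/fake_images").glob("*.jpg"):
    reduce_resolution(f)
```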
After obtaining a slew of fake photographs in the first phase, we can use our face composite file to create an average face from these images.
For the composite face, we go through three steps: first we detect the facial features, then we normalize the images to the same reference frame (600×600), and finally we align the faces together (a simplified sketch follows the steps below).
We use dlib's face recognition built with deep learning. For each facial image we calculate 68 facial landmarks using dlib; check out the face dot map markup (what each dot represents) here!
Perform the following actions:
- Download the shape predictor shape_predictor_68_face_landmarks.dat from this link and save it under the folder
src/detectors/dots_detector/
- Install virtualenv:
pip install virtualenv
- Build the virtual environment:
virtualenv env
- Activate the virtual environment.
For Windows: env/Scripts/activate
For Linux: source env/bin/activate
- Install the requirements for this step:
pip install dlib numpy opencv-python imutils
- Run this command to build the dot files for each image:
python src/faceCreatePoints.py \
  --dat src/detectors/dots_detector/shape_predictor_68_face_landmarks.dat \
  --imagespath src/data/fake_images
- Run the faceComposite file:
python src/faceComposite.py --path src/data/fake_images
- You will find the results inside the folder
./src/data/fake_images/composite
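To make the three steps concrete, here is a minimal sketch of the normalize-and-average idea. The landmark-file format, the target eye/nose positions, and the output file name are our assumptions; faceComposite.py likely warps each facial region more carefully:

```python
# Simplified face averaging: affine-align each face to a 600x600 frame using
# three landmarks (outer eye corners 36 and 45, nose tip 33 in dlib's
# 68-point model), then average the aligned pixels.
from pathlib import Path

import cv2
import numpy as np

SIZE = 600
# Target positions in the reference frame for landmarks 36, 45, and 33
DST = np.float32([[180, 240], [420, 240], [300, 380]])

def load_points(path: Path) -> np.ndarray:
    # Assumes faceCreatePoints.py wrote 68 "x y" lines per image
    return np.float32([line.split() for line in path.read_text().splitlines() if line])

images = sorted(Path("src/data/fake_images").glob("*.jpg"))
acc = np.zeros((SIZE, SIZE, 3), np.float64)
for img_path in images:
    pts = load_points(img_path.with_suffix(".txt"))
    src = np.float32([pts[36], pts[45], pts[33]])
    M = cv2.getAffineTransform(src, DST)  # exact affine from 3 point pairs
    acc += cv2.warpAffine(cv2.imread(str(img_path)), M, (SIZE, SIZE))
cv2.imwrite("average_face.jpg", (acc / len(images)).astype(np.uint8))
```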
We used the demo of the 3D Face Reconstruction from a Single Image library; for more info, you can read the project website.
Coming soon ....
Instead of using our voices, we will use various methods to clone them, convert text to speech, or generate songs for video recording.
We use Google Colab for voice cloning.
Please note: before running the commands in the Google Colab, prepare an audio recording of your voice, upload it to the Colab, change the file name "trump10.wav" to your audio file's name, and change the text in the 6th cell to the text you want the voice to say.
Coming Soon!
Coming Soon!
Thanks to Real_Time_Image_Animation, we built our head-bobbing face inside an image. Follow these steps to create your own!
- Install virtualenv:
pip install virtualenv
- Build the virtual environment:
virtualenv env
- Activate the virtual environment.
For Windows: env/Scripts/activate
For Linux: source env/bin/activate
- Install the requirements by running this command:
pip install -r src/requirements/headbobble_requirements.txt
- Download the training module vox-adv-cpk.pth.tar from link1, link2, or link3, or via the command below, then save it under a folder called src/checkpoints:
curl https://openavatarify.s3.amazonaws.com/weights/vox-adv-cpk.pth.tar \
  --output src/checkpoints/vox-adv-cpk.pth.tar
- Prepare a slow-motion recorded video of yourself; you can use an application like Cheese on Linux or another video-recording application.
- Run these commands:
cd src/Real_Time_Image_Animation
python headbobble_image_animation.py \
  --checkpoint ../checkpoints/vox-adv-cpk.pth.tar \
  --input_image ../../doc/face_composite.png \
  --input_video ../../doc/face_headbobble_1_source.mp4
- You can use your webcam instead of a recorded video by running this command:
python image_animation.py \
  --checkpoint ../checkpoints/vox-adv-cpk.pth.tar \
  --input_image ../../doc/face_composite.png
- You will find the results in the output folder.
Comparison of MyHeritage and local techniques, which shows where the local technique has issues:
We can compensate for those issues by limiting the head movements:
Note: if the result does not show your webcam frame, you may need to manually edit the X,Y offsets at line 67. To get the size of your webcam frame, use testWebcam.py and examine the output image in an image editor to figure out the X,Y offset that should be used in the image_animation code.
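Something along these lines works for the webcam-size check (testWebcam.py itself may differ):

```python
# Grab one webcam frame, print its size, and save it for inspection in an
# image editor so you can pick suitable X,Y offsets.
import cv2

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
cap.release()
if ok:
    h, w = frame.shape[:2]
    print(f"Webcam frame size: {w}x{h}")
    cv2.imwrite("webcam_frame.png", frame)
else:
    print("Could not read from webcam")
```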
The above uses the Real_Time_Image_Animation library with modifications. To rebuild Real_Time_Image_Animation from scratch:
git clone \
https://github.com/anandpawara/Real_Time_Image_Animation \
faces/Real_Time_Image_Animation
# Modify the code to use the CPU and create the destination output file
cp headbobble_image_animation.py faces/Real_Time_Image_Animation/.
We lip-synced using Google Colab or the Wav2Lip library locally.
Follow these steps for using Google Colab:
- In your Google Drive, create two folders named "Wav2Lip" and "Wav2lip".
- Save the images and the audio files in the "Wav2Lip" folder.
- Save the Wav2Lip + GAN training module in the "Wav2lip" folder.
To download the training module, click here!
Then run the commands in the Google Colab, supplying the authorization code for your Google Drive so it can access the folders, as in the snippet below.
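In the notebook, the Drive authorization step typically looks like this (the exact cell in the Colab may differ):

```python
# Mount Google Drive in Colab; paste the authorization code when prompted.
from google.colab import drive

drive.mount('/content/drive')
# The Wav2Lip folders are then visible under /content/drive/MyDrive/
```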
Follow these steps for using Wav2Lip locally:
git clone https://github.com/Rudrabha/Wav2Lip
sudo apt-get install ffmpeg
pip install librosa tqdm==4.45.0 numba opencv-contrib-python
or: pip install -r requirements.txt
mkdir -p detectors/face_detector
curl https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth \
--output detectors/face_detector/s3fd.pth
cd voices/Wav2Lip
python inference.py \
--checkpoint_path ../weights/wav2lip_gan.pth \
--face "../../documentation/face_headbobble_3_destination.avi" \
--audio "../../documentation/voice_gallium_20sec.mp3"
If the result appears to lack audio:
ffmpeg -i temp/result.avi -i temp/temp.wav -c:v copy -c:a aac temp/result_combined.avi
The results will appear in the temp/ folder.
Result videos including audio are here and here.
Different layouts for our online gallery. Please check out the presentation files inside the layout folder. Coming soon!
You may encounter some issues; you can find solutions to them in the issues file. If you cannot find your issue there, open a new one and we will assist you in resolving it.
Please take a look at our contributing guidelines if you're interested in helping!
Want to say hi or buy us a coffee? Send us a PM or donate to us here. Thanks! :sunglasses: :two_hearts: