First, we need to set up the environment.
Anaconda is too heavy for me, so I went with the minimal installation, Miniconda.
Go to https://docs.conda.io/en/latest/miniconda.html
Download the Python 3.8 installer for Linux, because the project only supports Python 3.8 and below.
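For reference, the download-and-install steps look like this (the installer filename is just an example; pick whichever Python 3.8 Linux build the page lists):
# download and run the installer
wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.12.0-Linux-x86_64.sh
bash Miniconda3-py38_4.12.0-Linux-x86_64.sh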
After installation, re-open your terminal.
You will land in conda's base environment:
(base) user@user-machine:~$
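Everything below works fine in base, but if you prefer a dedicated environment (optional, just my habit), create one first:
# optional: keep faceswap in its own environment
conda create -n faceswap python=3.8
conda activate faceswap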
git clone https://github.com/deepfakes/faceswap.git
Setting up the project is then easy:
cd faceswap
python setup.py
The setup script will ask you about a few options along the way.
OK, almost done.
Extracting
Here, we need to prepare the dataset for model training; the first step is to extract faces from it.
I prepared two sources of video and photos, and put them into the paths below respectively:
- faceswap/src/example1
- faceswap/src/example2
Command line:
# extract faces from photos (the -o output flag is required)
python faceswap.py extract -i ~/faceswap/src/example1 -o ~/faceswap/faces/example1
# extract faces from a video
python faceswap.py extract -i ~/faceswap/src/ex1.mp4 -o ~/faceswap/faces/example1
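A quick sanity check that the extraction actually produced faces (plain shell, nothing faceswap-specific):
# count the extracted face images
ls ~/faceswap/faces/example1 | wc -l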
Training
Now we can start training our model:
python faceswap.py train -A ~/faceswap/faces/example1 -B ~/faceswap/faces/example2 -m ~/faceswap/ex1_ex2_model/ -p
The -p option shows a live preview of the training progress.
You can stop training at any time, but I recommend waiting until the A and B faces show up clearly on the preview screen (which means high accuracy).
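As far as I understand, the model state lives in the folder passed to -m, so stopping is safe; re-running the same command later should resume from the saved weights:
# resume training later: same command, same model directory
python faceswap.py train -A ~/faceswap/faces/example1 -B ~/faceswap/faces/example2 -m ~/faceswap/ex1_ex2_model/ -p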
Converting
Now, we can convert the video to test whether the model is good or not.
python faceswap.py convert -i ~/faceswap/src/ex1.mp4 -o ~/faceswap/converted/ -m ~/faceswap/ex1_ex2_model/
After this command, you will get a lot of frames under the "converted" folder.
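The ffmpeg patterns below use bare filenames, so change into the output folder first and check how the frames are actually named:
# run ffmpeg from the folder that holds the frames
cd ~/faceswap/converted
ls | head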
To generate a video from these frames:
ffmpeg -i video-frame-%0d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
Recording my own command:
# my frames are named Adi_000001.png ~ Adi_000171.png, so Adi_000%03d.png matches the trailing three digits
ffmpeg -i Adi_000%03d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
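One extra step I'd add (my own tip, not from the faceswap docs): the stitched out.mp4 is silent, so if the source video had audio you can copy the track back with ffmpeg:
# copy the video from out.mp4 and the audio from the original clip, no re-encoding
ffmpeg -i out.mp4 -i ~/faceswap/src/ex1.mp4 -c copy -map 0:v:0 -map 1:a:0 out_with_audio.mp4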
CY