This is a pix2pix demo that learns from facial landmarks and translates them into a face. A webcam-enabled application is also provided that translates your face to the trained face in real-time.
```bash
# Clone this repo
git clone git@github.com:datitran/face2face-demo.git

# Create the conda environment from file (Mac OSX)
conda env create -f environment.yml
```
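After the environment is created, activate it before running anything. The environment name is defined in `environment.yml`; `face2face-demo` below is an assumption, so check the file if activation fails.

```bash
conda activate face2face-demo  # on older conda: source activate face2face-demo
```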
```bash
python generate_train_data.py --file angela_merkel_speech.mp4 --num 400 --landmark-model shape_predictor_68_face_landmarks.dat
```

Input:

- `file` is the name of the video file from which you want to create the data set.
- `num` is the number of training samples to create.
- `landmark-model` is the facial landmark model used to detect the landmarks. A pre-trained facial landmark model is provided here.

Output:

- Two folders, `original` and `landmarks`, will be created.

If you want to download my dataset, here is also the video file that I used and the generated training dataset (400 images already split into training and validation).
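For a sense of what `generate_train_data.py` does per frame: it reads the video, finds the face with dlib, saves the raw frame into `original`, and saves a black image with only the detected landmarks drawn on it into `landmarks`. Below is a rough, simplified sketch of that loop (the real script draws the facial contours rather than plain dots; anything not shown in the command above is illustrative):

```python
import os
import cv2
import dlib
import numpy as np

os.makedirs("original", exist_ok=True)
os.makedirs("landmarks", exist_ok=True)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("angela_merkel_speech.mp4")
count = 0
while count < 400:  # --num
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        continue  # skip frames without a detected face
    landmarks = predictor(gray, faces[0])
    canvas = np.zeros_like(frame)  # black background for the landmark image
    for i in range(68):
        point = landmarks.part(i)
        cv2.circle(canvas, (point.x, point.y), 2, (255, 255, 255), -1)
    cv2.imwrite("original/{}.png".format(count), frame)
    cv2.imwrite("landmarks/{}.png".format(count), canvas)
    count += 1
cap.release()
```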
```bash
# Clone the repo from Christopher Hesse's pix2pix TensorFlow implementation
git clone https://github.com/affinelayer/pix2pix-tensorflow.git

# Move the original and landmarks folder into the pix2pix-tensorflow folder
mv face2face-demo/landmarks face2face-demo/original pix2pix-tensorflow/photos

# Go into the pix2pix-tensorflow folder
cd pix2pix-tensorflow/

# Resize original images
python tools/process.py \
  --input_dir photos/original \
  --operation resize \
  --output_dir photos/original_resized

# Resize landmark images
python tools/process.py \
  --input_dir photos/landmarks \
  --operation resize \
  --output_dir photos/landmarks_resized

# Combine both resized original and landmark images
python tools/process.py \
  --input_dir photos/landmarks_resized \
  --b_dir photos/original_resized \
  --operation combine \
  --output_dir photos/combined
```
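The combine step does no learning; it simply pastes each landmark image (A) and its matching face frame (B) side by side into one file, which is the paired format pix2pix-tensorflow expects. A minimal sketch of the resulting layout, with hypothetical file names:

```python
from PIL import Image

# Paste the landmark image (A) and the face image (B) side by side,
# mirroring what tools/process.py --operation combine produces.
# "0001.png" is a hypothetical file name.
a = Image.open("photos/landmarks_resized/0001.png")  # landmark drawing
b = Image.open("photos/original_resized/0001.png")   # matching face frame

combined = Image.new("RGB", (a.width + b.width, a.height))
combined.paste(a, (0, 0))        # A on the left half
combined.paste(b, (a.width, 0))  # B on the right half
combined.save("combined_0001.png")
```

With this layout, `--which_direction AtoB` in the training command below means "translate the left half (landmarks) into the right half (face)".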
```bash
# Split into train/val set
python tools/split.py \
  --dir photos/combined

# Train the model on the data
python pix2pix.py \
  --mode train \
  --output_dir face2face-model \
  --max_epochs 200 \
  --input_dir photos/combined/train \
  --which_direction AtoB
```
For more information about training, have a look at Christopher Hesse's pix2pix-tensorflow implementation.
First, we need to reduce the trained model so that we can use an image tensor as input:
```bash
python reduce_model.py --model-input face2face-model --model-output face2face-reduced-model
```

Input:

- `model-input` is the model folder to be imported.
- `model-output` is the folder to which the reduced model is exported.

Output:

- The reduced model is written to the `model-output` folder.
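If you want to verify what the reduced graph contains, for example the names of its input and output tensors, you can restore the checkpoint and list its operations. A minimal sketch, assuming TensorFlow 1.x and the folder name used above:

```python
import tensorflow as tf

# Restore the reduced model's graph definition and print every op name (TF 1.x).
checkpoint = tf.train.get_checkpoint_state("face2face-reduced-model")
tf.train.import_meta_graph(checkpoint.model_checkpoint_path + ".meta")
for op in tf.get_default_graph().get_operations():
    print(op.name)
```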
Second, we freeze the reduced model to a single file.

```bash
python freeze_model.py --model-folder face2face-reduced-model
```

Input:

- `model-folder` is the model folder of the reduced model.

Output:

- `frozen_model.pb` in the model folder.

I have uploaded a pre-trained frozen model here. This model was trained on 400 images for 200 epochs.
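Freezing in TensorFlow 1.x means folding the checkpoint's variable values into the graph definition as constants, so the whole model lives in a single `.pb` file. `freeze_model.py` does this for you; the sketch below shows the general technique, assuming TF 1.x and an output node named `generate_output/output` (that node name is an assumption; use the op listing above to find the real one):

```python
import tensorflow as tf

model_folder = "face2face-reduced-model"
checkpoint = tf.train.get_checkpoint_state(model_folder)

# Rebuild the graph from the checkpoint's metagraph and restore the weights.
saver = tf.train.import_meta_graph(checkpoint.model_checkpoint_path + ".meta")
with tf.Session() as sess:
    saver.restore(sess, checkpoint.model_checkpoint_path)
    # Convert variables to constants, keeping only what the output node needs.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess,
        tf.get_default_graph().as_graph_def(),
        ["generate_output/output"],  # assumed output node name
    )
    with tf.gfile.GFile(model_folder + "/frozen_model.pb", "wb") as f:
        f.write(frozen.SerializeToString())
```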
```bash
python run_webcam.py --source 0 --show 0 --landmark-model shape_predictor_68_face_landmarks.dat --tf-model face2face-reduced-model/frozen_model.pb
```

Input:

- `source` is the device index of the camera (default=0).
- `show` is an option to display either the normal input (0) or the facial landmarks (1) alongside the generated image (default=0).
- `landmark-model` is the facial landmark model that is used to detect the landmarks.
- `tf-model` is the frozen model file.
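At its core, `run_webcam.py` loads the frozen graph, turns each webcam frame into a landmark image, and feeds that through the network. The stripped-down sketch below skips the landmark step and feeds the resized frame directly, assuming TF 1.x, a 256x256 model input, and tensor names `input_image:0` / `output_image:0` (the tensor names are assumptions, not taken from the repo):

```python
import cv2
import numpy as np
import tensorflow as tf

# Load the frozen graph into a fresh tf.Graph (TF 1.x).
graph_def = tf.GraphDef()
with tf.gfile.GFile("face2face-reduced-model/frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    cap = cv2.VideoCapture(0)  # device index, as with --source
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # The real demo converts the frame to a landmark drawing first;
        # here the resized frame is fed directly for brevity.
        inp = cv2.resize(frame, (256, 256))
        out = sess.run("output_image:0", feed_dict={"input_image:0": inp})
        cv2.imshow("generated", out.astype(np.uint8))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```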
Kudos to Christopher Hesse for his amazing pix2pix TensorFlow implementation and Gene Kogan for his inspirational workshop.