Animate Images using AI with Frame Interpolation for Large Motion
A quick look at the Frame Interpolation for Large Motion (FILM) machine learning model, which interpolates one or more intermediate images between two endpoint images. This can be used to create short animations: the model can write output video files that are stitched together, or the individual interpolated frames can be assembled into videos.
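For the second option, the interpolated frames can be stitched into a video with ffmpeg. A minimal sketch in Python; the helper name, the `frame_%03d.png` naming pattern, and the fps value are my own assumptions, not part of the FILM tooling:

```python
import subprocess

def build_ffmpeg_cmd(frame_dir, fps=30, out_path="out.mp4"):
    """Build an ffmpeg command that stitches numbered PNG frames into a video.

    Assumes frames are named frame_000.png, frame_001.png, ... (hypothetical
    naming; adjust the pattern to match your actual output files).
    """
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", f"{frame_dir}/frame_%03d.png",
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",  # widely compatible pixel format
        out_path,
    ]

# To actually run it (requires ffmpeg on PATH):
# subprocess.run(build_ffmpeg_cmd("frames", fps=24, out_path="anim.mp4"), check=True)
```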
Demo animation:
https://www.youtube.com/watch?v=AfFP8xZc1t4
Frame Interpolation for Large Motion Github:
https://github.com/google-research/frame-interpolation
Paths to add (to the Windows PATH environment variable; INSTALL_PATH is your CUDA install directory):
INSTALL_PATH\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
INSTALL_PATH\NVIDIA GPU Computing Toolkit\CUDA\v11.2\libnvvp
INSTALL_PATH\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include
INSTALL_PATH\NVIDIA GPU Computing Toolkit\CUDA\v11.2\extras\CUPTI\lib64
INSTALL_PATH\NVIDIA GPU Computing Toolkit\CUDA\v11.2\cuda\bin
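A quick sanity check that the entries above actually made it into PATH — a sketch with a hypothetical helper; substitute your real INSTALL_PATH values:

```python
import os

def missing_from_path(required_dirs, path_env=None, sep=os.pathsep):
    """Return the subset of required_dirs not present in the PATH variable.

    Comparison is case-insensitive, matching Windows path semantics.
    """
    if path_env is None:
        path_env = os.environ.get("PATH", "")
    entries = {p.strip().lower() for p in path_env.split(sep) if p.strip()}
    return [d for d in required_dirs if d.strip().lower() not in entries]

# Example with placeholder paths:
# missing = missing_from_path([
#     r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin",
# ])
# print("Missing from PATH:", missing)
```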
TensorFlow test:
pip install --ignore-installed --upgrade tensorflow==2.6.0
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
Run interpolation:
python -m frame_interpolation.eval.interpolator_cli \
--pattern "frame_interpolation/photos" \
--model_path pretrained_models/film_net/Style/saved_model \
--times_to_interpolate 6 \
--output_video
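The --times_to_interpolate flag controls recursive midpoint interpolation: each pass doubles the number of intervals between inputs, so a value of t yields 2^t - 1 new in-between frames per input pair, for 2^t + 1 output frames total from two photos. A small sketch of the arithmetic (the helper name is my own):

```python
def output_frame_count(times_to_interpolate, num_inputs=2):
    """Total frames after recursive midpoint interpolation.

    Each pass doubles the number of intervals between consecutive inputs,
    so (num_inputs - 1) intervals become (num_inputs - 1) * 2**t intervals.
    """
    intervals = (num_inputs - 1) * 2 ** times_to_interpolate
    return intervals + 1

# With the flags above (two input photos, --times_to_interpolate 6):
# output_frame_count(6) -> 65 frames, of which 63 are newly interpolated
```

Higher values give smoother, longer animations at the cost of roughly doubling the work per increment.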