Training LoRA and GLoRA on SD 1.5 & XL with the Prodigy Optimizer using the Kohya_SS scripts.
In today's video, I look at training LoRA and GLoRA adapters for Stable Diffusion 1.5 and XL using the Prodigy optimizer on a large and varied dataset of 16 characters. Then I show how you can fine-tune an existing adapter on a new dataset to kick-start your training.
[00:00] Intro, what is covered in the video
[00:34] Overall goal of the training I'm doing here
[01:44] WSL training vs. Windows 10 training
[02:45] Hardware specs used
[03:10] Bing Image Downloader for image scraping/dataset building
[03:30] DupeGuru for file deduplication
[04:20] BIRME and locally hosted BIRME clone
[06:00] Description of the dataset used for fine-tuning
[11:30] Stable Diffusion 1.5 LoRA results using the Prodigy optimizer
[15:00] Stable Diffusion 1.5 GLoRA results using the Prodigy optimizer
[17:00] Stable Diffusion XL LoRA results using the Prodigy optimizer, and how to train an SDXL LoRA on 12GB of VRAM
[19:00] Speeding up training: using an existing LoRA as a base to fine-tune on a new dataset. Training Alice in Wonderland on top of the comic adapter.
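For reference, here is a minimal sketch of what a Kohya_SS sd-scripts invocation with Prodigy might look like. This is not the exact command from the video: the model path, network dim, and optimizer argument values are illustrative assumptions based on common Prodigy guidance (Prodigy adapts its own step size, so the learning rates are conventionally set to 1.0).

```shell
# Hypothetical sketch of a Kohya_SS train_network.py run with Prodigy.
# Flag values are illustrative, not the settings used in the video.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --network_module=networks.lora \
  --network_dim=32 \
  --optimizer_type="Prodigy" \
  --learning_rate=1.0 --unet_lr=1.0 --text_encoder_lr=1.0 \
  --optimizer_args "decouple=True" "weight_decay=0.01" "use_bias_correction=True" \
  --lr_scheduler="constant" \
  --network_weights="existing_adapter.safetensors"  # optional: resume from an existing LoRA
```

The optional `--network_weights` flag is what enables the "fine-tune on top of an existing adapter" approach shown at the end of the video: it loads a previously trained LoRA as the starting point instead of random initialization. For GLoRA, the network module would come from LyCORIS instead (e.g. `--network_module=lycoris.kohya` with `--network_args "algo=glora"`).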