
[–]Ending_Credits 1 point (3 children)

Fine-tuning on a small dataset (in this case, 500 images) seems to work really well. I retrained my model for an extra 'tick' on Zuihou and got these results; a rough sketch of the retraining step follows the sample links.

Samples:

https://i.imgur.com/lhKbMky.jpg

Some morphing:

https://i.imgur.com/rhedp4l.mp4

More morphing:

https://i.imgur.com/sCn11bE.mp4
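
For context, "retraining for an extra tick" means briefly resuming an existing StyleGAN run on the new data. A minimal sketch of that step, assuming the NVlabs StyleGAN codebase (paths, folder names, and snapshot details are placeholders, not the commenter's exact setup):

    # build a tfrecord dataset from the ~500 Zuihou face crops
    python dataset_tool.py create_from_images datasets/zuihou faces/zuihou/
    # point resume_run_id / resume_snapshot / resume_kimg in
    # training/training_loop.py at the latest network-snapshot-*.pkl,
    # then resume training for one more tick:
    python train.py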

[–]gwern 2 points (2 children)

I'm impressed that just 500 images works that well. By 500, do you mean 500 originals? If so, perhaps you could use aggressive data augmentation to improve the finetuning (or the final face StyleGAN model).

I have a ghetto data-augmentation script using ImageMagick & GNU parallel, which appears to work well:

dataAugment () {
    image="$1" # GNU parallel passes one filename per invocation
    target=$(basename "$image" | cut -c 1-200) # truncate so suffixes can still be appended to very long filenames
    suffix="png"
    # each enabled transform writes one lightly perturbed copy of the input:
    # nice convert -flop                         "$image" "$target".flipped."$suffix"
    nice convert -background black -deskew 50    "$image" "$target".deskew."$suffix"
    nice convert -fill red    -colorize 3%       "$image" "$target".red."$suffix"
    nice convert -fill orange -colorize 3%       "$image" "$target".orange."$suffix"
    nice convert -fill yellow -colorize 3%       "$image" "$target".yellow."$suffix"
    nice convert -fill green  -colorize 3%       "$image" "$target".green."$suffix"
    nice convert -fill blue   -colorize 3%       "$image" "$target".blue."$suffix"
    # nice convert -fill purple -colorize 3%     "$image" "$target".purple."$suffix"
    nice convert -adaptive-sharpen 4x2           "$image" "$target".sharpen."$suffix"
    nice convert -brightness-contrast 10         "$image" "$target".brighter."$suffix"
    # nice convert -brightness-contrast -10      "$image" "$target".darker."$suffix"
    # nice convert -brightness-contrast -10x10   "$image" "$target".darkerlesscontrast."$suffix"
    nice convert +level 3%                       "$image" "$target".contraster."$suffix"
    # nice convert -level 3%\!                   "$image" "$target".lesscontrast."$suffix"
}
export -f dataAugment
find . -type f | parallel dataAugment
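
With the nine transforms left uncommented, each source image gains nine lightly perturbed copies, a roughly 10× expansion before the mirror augmentation applied during training. A quick sanity check after a run, assuming the augmented copies were written to the current directory (filename suffixes as generated by the script above):

    # each processed input should have produced one copy per transform,
    # so these counts should all match:
    ls *.deskew.png | wc -l
    ls *.red.png | wc -l
    ls *.contraster.png | wc -l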

[–]Ending_Credits 2 points (1 child)

No data augmentation beyond the standard mirroring used during training. My dataset is split into folders by character (500 images from each of the top 500 character tags, though in practice it's more like 200-400 per tag due to face-detection failures). I just grab one or more of those folders, remake the dataset, and then train for one more tick (60k iterations); the dataset-remake step is sketched below.
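
A minimal sketch of the "grab folders and remake the dataset" step, assuming the per-character folder layout described above and the dataset_tool.py from the NVlabs StyleGAN repo (all paths are hypothetical):

    # pool one or more character folders into a single image directory
    mkdir -p subset
    cp faces/zuihou/*.png faces/louise/*.png subset/
    # rebuild the tfrecords that StyleGAN trains on
    python dataset_tool.py create_from_images datasets/subset subset/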

More samples:

Saberfaces (about 4000 images)

https://i.imgur.com/Q65jElX.mp4

Louise Francoise (just 350 images)

https://i.imgur.com/ouGdWbu.mp4

[–]gwern 0 points (0 children)

I'm not surprised those work (or that you got so many Sabers out): if Asuka & Holo work, why not them? Data augmentation would probably let you train longer, and get better results, before artifacts from overfitting appear.