all 7 comments

[–]neurokinetikz[S] 5 points (0 children)

This is Nvidia's StyleGAN network, which was recently open-sourced:

https://github.com/NVlabs/stylegan

It's training on the Oxford Visual Geometry Group Flowers 102 Dataset:

http://www.robots.ox.ac.uk/~vgg/data/flowers/

And you may know StyleGAN as the AI used to generate fake human faces here:

https://thispersondoesnotexist.com/

It's basically an AI that dreams up infinite variations on a subject that it has been trained on. In this case, flowers :)
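To make the "infinite variations" idea concrete: StyleGAN's generator maps random latent vectors to images, so every fresh draw from a standard normal distribution yields a different flower. A minimal sketch of that input side (the generator call itself is hypothetical; the real API lives in the linked NVlabs repo):

```python
import numpy as np

rng = np.random.default_rng(42)

# StyleGAN takes a 512-dimensional latent vector drawn from a standard
# normal distribution; each sample maps to a different generated image.
z = rng.standard_normal((16, 512))  # a batch of 16 random latents

# images = generator(z)  # hypothetical call; the trained network from the
#                        # NVlabs/stylegan repo would turn each z into a flower
```

Sampling more rows of `z` is all it takes to get more "dreamed" flowers.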

[–]lessthanoptimal 4 points (0 children)

Could you explain a bit more what this is? Thanks!

[–]shebbbb 1 point (1 child)

What is it that is being interpolated between?

[–]neurokinetikz[S] 2 points (0 children)

It's interpolating between the learned latent representations of the flowers from the dataset ... compression from the upload kills the quality :)
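A minimal sketch of what such an interpolation might look like, assuming 512-dimensional latent vectors and simple linear blending (the generator call is hypothetical; the actual network comes from the linked repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# two random latent codes, one per "keyframe" flower
z0 = rng.standard_normal(512)
z1 = rng.standard_normal(512)

# linearly interpolate between them to produce one latent per video frame
num_frames = 60
frames = [(1 - t) * z0 + t * z1 for t in np.linspace(0.0, 1.0, num_frames)]

# each frame's latent would then be fed to the generator:
# img = generator(frame)  # hypothetical call; see the NVlabs/stylegan repo
```

Rendering each interpolated latent in sequence is what produces the smooth morph between flowers.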

[–]Ashar7 0 points (1 child)

Do you know how long it took to train, and on which GPU?

[–]neurokinetikz[S] 1 point (0 children)

I have 2 RTX 2080 Tis; it took about 4 days.

[–]ludwig_eduard 0 points (0 children)

How did you get the generated images to transition so smoothly? It looks amazing. Did you use a video-editing tool, or are you gradually changing some input or parameter? The only way I can think of doing something like this is to mix the styles of the current image and the next image, gradually decreasing the index of the dlatent transfer layers so as to go from fine to coarse style transfer. I vaguely remember the authors doing something similar in their video presentation, but I couldn't find out how they did it. Please tell me how you managed this!
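For context on the interpolation the question describes: one common way to walk between latent codes (linear blending tends to leave the high-dimensional Gaussian shell, so spherical interpolation is often preferred) is a slerp. A hedged sketch, not necessarily what the OP did:

```python
import numpy as np

def slerp(t, a, b):
    """Spherical interpolation between latent vectors a and b, for t in [0, 1]."""
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)
    # angle between the two latent directions
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < 1e-8:
        # vectors are nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```

Evaluating `slerp` at many `t` values between consecutive latent codes, and rendering each result with the generator, gives a smooth morph without any video-editing tricks.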