Removing blob artifact from StyleGAN generations without retraining. Inspired by StyleGAN2 (self.MachineLearning)
submitted 1 month ago by stpidhorskyi
[–]ogrisel 2 points 1 month ago
Nice hack :)
Do you have any intuition as to why the network would try to "fool the instance normalization layer" in the first place? Is it related to the adversarial training, or is this an artifact of instance normalization that would appear in any deconv architecture (e.g. VAE, GLOW, U-Net...) with instance normalization layers?
[–]stpidhorskyi[S] 2 points 1 month ago
Thanks!
I doubt it has anything to do with adversarial learning. It is definitely tied to instance normalization, but I'm not sure whether this artifact would appear in other architectures; it might be specific to the style-based architecture.
My initial hypothesis, from a while ago, was that for some unknown reason the network wants some channels to be non-zero-mean, mostly negative (the spikes I observed are positive, so the rest of the map goes negative after normalization). Why? I don't know. Instance normalization, however, will always force the channel back to zero mean, no matter what.
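To make the tension concrete, here is a small NumPy sketch (my own toy stand-in for an instance normalization layer, not code from the post's repo): whatever constant offset the generator produces in a channel, per-instance normalization removes it.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each channel of each sample to zero mean and unit variance
    over the spatial dimensions, as InstanceNorm/AdaIN does."""
    mean = x.mean(axis=(-2, -1), keepdims=True)
    std = x.std(axis=(-2, -1), keepdims=True)
    return (x - mean) / (std + eps)

# A channel the network "wants" to be mostly negative: the constant offset
# is there before normalization, but normalization destroys it.
feat = np.full((1, 1, 8, 8), -3.0)                # mostly-negative feature map
feat += 0.1 * np.random.randn(1, 1, 8, 8)         # small texture on top
normed = instance_norm(feat)
print(feat.mean())    # ≈ -3.0 before normalization
print(normed.mean())  # ≈ 0.0 after: the offset is gone
```

So if the network really does need a non-zero channel mean downstream, it has to encode that information some other way than in the mean itself.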
This is what StyleGAN2 paper says:
We pinpoint the problem to the AdaIN operation that normalizes the mean and variance of each feature map separately, thereby potentially destroying any information found in the magnitudes of the features relative to each other. We hypothesize that the droplet artifact is a result of the generator intentionally sneaking signal strength information past instance normalization: by creating a strong, localized spike that dominates the statistics, the generator can effectively scale the signal as it likes elsewhere. Our hypothesis is supported by the finding that when the normalization step is removed from the generator, as detailed below, the droplet artifacts disappear completely.
So their claim is that instance normalization destroys the relative magnitudes between channels, and creating a spike is a way around it. Seems very plausible. Something I find interesting:
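The paper's "sneaking signal strength past normalization" mechanism can be demonstrated with a toy NumPy experiment (again a hand-rolled sketch, not the authors' code): without a spike, instance normalization erases the content scale entirely; with one dominating spike, the spike soaks up the statistics and the content scale survives.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Per-map zero-mean, unit-variance normalization."""
    return (x - x.mean()) / (x.std() + eps)

rng = np.random.default_rng(0)
pattern = rng.standard_normal((64, 64))   # stand-in for feature-map content

def post_norm_content_std(scale, spike=0.0):
    """Scale the content, optionally add one dominating spike pixel,
    normalize, then measure how strong the content is afterwards."""
    x = scale * pattern.copy()
    x[0, 0] += spike
    y = instance_norm(x)
    return y[1:, :].std()   # content statistics, spike row excluded

# Without a spike, normalization erases the scale: both come out ~1.0.
print(post_norm_content_std(1.0), post_norm_content_std(10.0))

# With a dominating spike, the 10x difference in content scale survives
# normalization, because the spike dominates the mean/std instead.
print(post_norm_content_std(1.0, spike=10000.0))
print(post_norm_content_std(10.0, spike=10000.0))   # ~10x larger
```

In other words, the spike gives the generator a free knob: the ratio of content amplitude to spike height passes through the normalization untouched.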
[–]SaveUser 1 point 1 month ago
This is fascinating. Thanks for the explanation and great work in the repo!
[–]ink404 0 points 29 days ago
Was wondering if anyone here knows of a method to use a trained StyleGAN model to transfer its style to an unseen image?