The co-author of StyleGAN from @NvidiaAI just gave a talk at @KAUST_News about their latest GAN. The latent introduced at different layers aims to force the discriminator to learn more variation, and that's what sets it apart from ProgressiveGAN.
They even removed the traditional latent input and replaced it with a constant tensor.
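The two ideas above can be sketched in a few lines, a toy paraphrase (not NVIDIA's code, and with the convolutions omitted): a learned constant replaces the usual latent input, and the latent instead modulates every layer through a per-channel scale and shift, AdaIN-style.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learned constant tensor that replaces the usual latent input
const_input = rng.standard_normal((1, 8, 4, 4))

def adain(x, style):
    # Split the style into a per-channel scale and shift
    c = x.shape[1]
    scale = style[:c].reshape(1, c, 1, 1)
    shift = style[c:].reshape(1, c, 1, 1)
    # Normalize each channel, then re-style it
    mu = x.mean(axis=(2, 3), keepdims=True)
    sd = x.std(axis=(2, 3), keepdims=True) + 1e-8
    return scale * (x - mu) / sd + shift

w = rng.standard_normal(16)  # latent, injected at every layer
x = const_input
for _ in range(3):           # three "layers"; convs/upsampling omitted
    x = adain(x, w)

print(x.shape)  # the spatial tensor keeps its shape, restyled by w
```

Because each layer renormalizes before applying the style, the latent controls the statistics of every layer rather than only the network's entry point.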
Another thing I find amazing: it looks like a good chunk of the FID gain over ProGAN comes from hyperparameter tuning (FID drops from 8.04 to 5.25 on FFHQ).