OpenCodePapers

Image Generation on ImageNet 64×64
Leaderboard
| Paper | Code | FID | Bits per dim | NFE | Inception Score | KID | Model Name | Release Date |
|---|---|---|---|---|---|---|---|---|
| Self-Improving Diffusion Models with Synthetic Data | | 0.921 | | 26 | | | SIMS | 2024-08-29 |
| Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator | ✓ | 0.97 | | 63 | | | EDM2-S+DDO | 2025-03-03 |
| Uni-Instruct: One-step Diffusion Model through Unified Diffusion Divergence Instruction | | 1.02 | | 1 | | | Uni-Instruct | 2025-05-27 |
| Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step | ✓ | 1.11 | | 1 | | | SiDA-EDM | 2024-10-19 |
| Diffusion Models Are Innate One-Step Generators | ✓ | 1.16 | | 1 | | | GDD-I | 2024-05-31 |
| PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher | ✓ | 1.21 | | 1 | 76.47 | | PaGoDA | 2024-05-23 |
| DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents | ✓ | 1.22 | | | | | DisCo-Diff | 2024-07-03 |
| Scalable Adaptive Computation for Iterative Generation | ✓ | 1.23 | | | | | RIN | 2022-12-22 |
| Diffusion Models Are Innate One-Step Generators | ✓ | 1.42 | | 1 | | | GDD | 2024-05-31 |
| Stable Consistency Tuning: Understanding and Improving Consistency Models | ✓ | 1.47 | | 2 | | | SCT | 2024-10-24 |
| Cascaded Diffusion Models for High Fidelity Image Generation | | 1.48 | | | | | CDM | 2021-05-30 |
| StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets | ✓ | 1.51 | | 1 | | | StyleGAN-XL | 2022-02-01 |
| Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation | ✓ | 1.524 | | 1 | | | SiD | 2024-04-05 |
| Truncated Consistency Models | | 1.62 | | 2 | | | TCM | 2024-10-18 |
| Consistency Models Made Easy | ✓ | 1.67 | | 2 | | | ECM-XL | 2024-06-20 |
| Constant Acceleration Flow | ✓ | 1.69 | | 2 | 62.03 | | CAF | 2024-11-01 |
| Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion | ✓ | 1.73 | | 2 | 64.29 | | CTM | 2023-10-01 |
| Diffusion Models Beat GANs on Image Synthesis | ✓ | 2.07 | | | | | ADM (dropout) | 2021-05-11 |
| Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling | ✓ | 2.16 | | | 78.7 | | LEGO | 2023-10-10 |
| Normalizing Flows are Capable Generative Models | ✓ | 2.9 | 2.99 | | | | TarFlow | 2024-12-09 |
| Improved Denoising Diffusion Probabilistic Models | ✓ | 2.92 | 3.53 | | | | Improved DDPM | 2021-02-18 |
| Improving the Training of Rectified Flows | ✓ | 3.64 | | 2 | | | 2-rectified flow++ (NFE=2) | 2024-05-30 |
| Improving the Training of Rectified Flows | ✓ | 4.31 | | 1 | | | 2-rectified flow++ (NFE=1) | 2024-05-30 |
| Consistency Models | ✓ | 4.70 | | 2 | | | CD (Diffusion + Distillation, NFE=2) | 2023-03-02 |
| Consistency Models | ✓ | 6.20 | | 1 | | | CD (Diffusion + Distillation, NFE=1) | 2023-03-02 |
| Consistency Models | ✓ | 11.1 | | 2 | | | CT (Direct Generation, NFE=2) | 2023-03-02 |
| Consistency Models | ✓ | 13.0 | | 1 | | | CT (Direct Generation, NFE=1) | 2023-03-02 |
| Flow Matching for Generative Modeling | ✓ | 14.45 | 3.31 | | | | FM | 2022-10-06 |
| CLR-GAN: Improving GANs Stability and Quality via Consistent Latent Representation and Reconstruction | ✓ | 20.27 | | | | | CLR-GAN | 2024-09-30 |
| Partition-Guided GANs | ✓ | 21.73 | | | | | PGMGAN | 2021-04-02 |
| Composing Ensembles of Pre-trained Models via Iterative Consensus | | 29.184 | | | 34.952 | 3.766 | GLIDE + CLIP + CLS + CLS-FREE | 2022-10-20 |
| Composing Ensembles of Pre-trained Models via Iterative Consensus | | 29.219 | | | 25.926 | 5.325 | GLIDE + CLS-FREE | 2022-10-20 |
| Composing Ensembles of Pre-trained Models via Iterative Consensus | | 30.462 | | | 25.017 | 6.174 | GLIDE + CLIP | 2022-10-20 |
| Composing Ensembles of Pre-trained Models via Iterative Consensus | | 30.871 | | | 22.077 | | GLIDE + CLS | 2022-10-20 |
| Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling | ✓ | | 3.2 | | | | NFDM | 2024-04-19 |
| Generative Modeling with Bayesian Sample Inference | ✓ | | 3.22 | | | | BSI | 2025-02-11 |
| Efficient-VDVAE: Less is more | ✓ | | 3.30 (different downsampling) | | | | Efficient-VDVAE | 2022-03-25 |
| Densely connected normalizing flows | ✓ | | 3.35 (different downsampling) | | | | DenseFlow-74-10 | 2021-06-08 |
| Neural Diffusion Models | | | 3.35 | | | | NDM | 2023-10-12 |
| Variational Diffusion Models | ✓ | | 3.40 | | | | VDM | 2021-07-01 |
| Combiner: Full Attention Transformer with Sparse Computation Cost | ✓ | | 3.42 | | | | Combiner-Axial | 2021-07-12 |
| Efficient Content-Based Sparse Attention with Routing Transformers | ✓ | | 3.43 | | | | Routing Transformer | 2020-03-12 |
| Generating Long Sequences with Sparse Transformers | ✓ | | 3.44 | | | | Sparse Transformer 59M (strided) | 2019-04-23 |
| Multi-Resolution Continuous Normalizing Flows | ✓ | | 3.44 | | | | MRCNF | 2021-06-15 |
| Hierarchical Transformers Are More Efficient Language Models | ✓ | | 3.44 | | | | Hourglass | 2021-10-26 |
| Combiner: Full Attention Transformer with Sparse Computation Cost | ✓ | | 3.504 | | | | Combiner-Mixture | 2021-07-12 |
| Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling | | | 3.52 | | | | SPN | 2018-12-04 |
| Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images | ✓ | | 3.52 | | | | Very Deep VAE | 2020-11-20 |
| PixelCNN Models with Auxiliary Variables for Natural Image Modeling | | | 3.57 | | | | PixelCNN | 2016-12-24 |
| Conditional Image Generation with PixelCNN Decoders | ✓ | | 3.57 | | | | Gated PixelCNN (van den Oord et al., 2016c) | 2016-06-16 |
| Rethinking Attention with Performers | ✓ | | 3.636 | | | | Performer (12 layers) | 2020-09-30 |
| Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design | ✓ | | 3.69 | | | | Flow++ | 2019-02-01 |
| MaCow: Masked Convolutional Generative Flow | ✓ | | 3.69 | | | | MaCow (Var) | 2019-02-12 |
| Parallel Multiscale Autoregressive Density Estimation | | | 3.7 | | | | Parallel Multiscale | 2017-03-10 |
| MALI: A memory efficient and reverse accurate integrator for Neural ODEs | ✓ | | 3.71 | | | | MALI | 2021-02-09 |
| Reformer: The Efficient Transformer | ✓ | | 3.710 | | | | Reformer (12 layers) | 2020-01-13 |
| Rethinking Attention with Performers | ✓ | | 3.719 | | | | Performer (6 layers) | 2020-09-30 |
| Reformer: The Efficient Transformer | ✓ | | 3.740 | | | | Reformer (6 layers) | 2020-01-13 |
| MaCow: Masked Convolutional Generative Flow | ✓ | | 3.75 | | | | MaCow (Unf) | 2019-02-12 |
| Residual Flows for Invertible Generative Modeling | ✓ | | 3.757 | | | | Residual Flow | 2019-06-06 |
| Glow: Generative Flow with Invertible 1x1 Convolutions | ✓ | | 3.81 | | | | Glow (Kingma and Dhariwal, 2018) | 2018-07-09 |
| Axial Attention in Multidimensional Transformers | ✓ | | 4.032 | | | | Axial Transformer (6 layers) | 2019-12-20 |
| Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting | ✓ | | 4.351 | | | | Logsparse (6 layers) | 2019-06-29 |
| Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion | ✓ | | | 1 | 70.38 | | CTM (NFE 1) | 2023-10-01 |
| Composing Ensembles of Pre-trained Models via Iterative Consensus | | | | | | 7.952 | GLIDE + CLS | 2022-10-20 |
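The FID column is the Fréchet Inception Distance: the Fréchet distance between Gaussian fits to Inception-v3 features of generated and reference images. A minimal sketch of that computation, assuming features have already been extracted (the function and variable names here are illustrative, not taken from any of the listed codebases):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_x, feats_y):
    """Frechet distance between Gaussian fits to two feature sets.

    feats_x, feats_y: (n_samples, dim) arrays of image features,
    e.g. Inception-v3 pool features of real and generated images.
    """
    mu_x, mu_y = feats_x.mean(axis=0), feats_y.mean(axis=0)
    cov_x = np.cov(feats_x, rowvar=False)
    cov_y = np.cov(feats_y, rowvar=False)
    # Matrix square root of the covariance product; numerical error
    # can introduce a tiny imaginary component, which we discard.
    covmean = sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu_x - mu_y) ** 2)
                 + np.trace(cov_x + cov_y - 2.0 * covmean))
```

Note that the leaderboard numbers come from each paper's own evaluation pipeline; sample counts and feature-extractor settings vary between papers, which limits how precisely nearby entries can be compared.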