What is a Diffusion Model?
A generative model that creates images by learning to progressively reverse a noise-adding process, enabling high-quality image synthesis.
Full Definition
Diffusion models are a class of generative models that learn to create data (typically images) by training on the task of reversing a noise-addition process. During training, Gaussian noise is iteratively added to real images across many steps until the image is pure noise; the model learns to predict and remove this noise at each step. At inference, the model starts from random noise and iteratively denoises it, guided by a text prompt or other conditioning signal, to produce a new image. Latent diffusion models (the technology behind Stable Diffusion, DALL-E 3, Midjourney, and Flux) operate in a compressed latent space rather than pixel space, making them far more efficient. Diffusion models have become the dominant paradigm for high-quality image generation, surpassing earlier GANs in diversity and controllability.
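The closed-form noising step and its inverse can be sketched in a few lines. This is an illustrative toy, assuming a standard DDPM-style linear beta schedule; a small NumPy array stands in for an image, and instead of a trained network predicting the noise we reuse the true noise to show that knowing it recovers the original exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative product, often written ᾱ_t

x0 = rng.uniform(-1, 1, size=(8, 8))    # a fake 8x8 "image"

def add_noise(x0, t, eps):
    """Sample x_t from q(x_t | x_0) in closed form."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

eps = rng.standard_normal(x0.shape)
x_T = add_noise(x0, T - 1, eps)         # at the final step: nearly pure noise

# A trained network would predict eps from (x_t, t); here we cheat and
# plug in the true eps to invert the noising step.
def denoise(x_t, t, predicted_eps):
    ab = alpha_bars[t]
    return (x_t - np.sqrt(1.0 - ab) * predicted_eps) / np.sqrt(ab)

x0_hat = denoise(x_T, T - 1, eps)
print(np.allclose(x0_hat, x0))          # → True: the image is recovered
```

In real models the network's noise prediction is imperfect, so generation runs the reverse step many times from pure noise, removing a little noise at each step; conditioning (such as a text prompt) steers what the network predicts.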
Tools that use Diffusion Model
Midjourney
The gold standard for AI image generation (v7, v8 alpha)
DALL-E
AI image generation integrated into ChatGPT
Stable Diffusion
Open-source AI image generation you can run locally or in the cloud
Adobe Firefly
AI image generation integrated into the Adobe Creative Cloud ecosystem
Leonardo.ai
AI image generation with custom model training and a generous free tier
Ideogram
Best AI image generator for accurate text rendering in images
Recraft
The only AI image generator that produces native vector graphics (SVG)