Diffusion models#

Diffusion models are a class of generative models widely used in image generation and other computer vision tasks. They are at the forefront of generative AI, powering popular text-to-image tools such as Stability AI's Stable Diffusion, OpenAI's DALL-E (starting from DALL-E 2), Midjourney, and Google's Imagen.
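At their core, diffusion models work by gradually adding Gaussian noise to training data (the forward process) and learning a neural network that reverses this corruption step by step. As a rough illustration, here is a minimal sketch of the DDPM-style forward process on a scalar sample, using a linear beta schedule; the specific schedule values (`beta_start`, `beta_end`, `num_steps`) are common defaults chosen for illustration, not values from this document:

```python
import math
import random

def forward_diffuse(x0, t, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Apply t steps of Gaussian noise to a scalar sample x0.

    Follows the standard DDPM forward process: x_t is sampled from
    N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t)), where alpha_bar_t is
    the cumulative product of (1 - beta_i) over the first t steps.
    """
    alpha_bar = 1.0
    for i in range(t):
        # Linear beta schedule from beta_start to beta_end (illustrative choice).
        beta = beta_start + (beta_end - beta_start) * i / (num_steps - 1)
        alpha_bar *= 1.0 - beta
    eps = random.gauss(0.0, 1.0)  # standard normal noise
    xt = math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps
    return xt, alpha_bar
```

As `t` grows, `alpha_bar` shrinks toward zero, so the sample is progressively replaced by pure noise; the generative model is trained to invert this trajectory, turning noise back into data.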

These models offer significant improvements in sample quality and training stability over earlier architectures for image synthesis, including variational autoencoders (VAEs), generative adversarial networks (GANs), and autoregressive models like PixelCNN.

In the examples that follow, you'll explore how to apply diffusion models to various use cases.