From the course: Programming Generative AI: From Variational Autoencoders to Stable Diffusion with PyTorch and Hugging Face
Reverse process as decoder
- [Instructor] Now that we have encoded our image into the diffusion model's latent space, we can decode it and see whether the reconstruction is reasonably accurate. To run the decoding process, we'll wrap everything in a decode function, again mirroring the latent variable model API. This decode is really the reverse process of the diffusion, and it is identical to what we ran before. When we unconditionally generate an image with diffusion, we only run the reverse process, so what we've seen so far is essentially running decode on something like pure noise, or on an encoded image. So, let's write our decode. For time's sake, I'm going to largely copy what we saw before, since this loop is really all we need; it is exactly the reverse process of the diffusion model. The only slight change we're going to make is that, instead of taking pure noise as input, we're going to pass in…
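The decode function described above can be sketched as a minimal DDPM-style reverse loop. This is an illustrative assumption, not the course's exact code: `noise_model` is a hypothetical stand-in for a trained noise-prediction network, and the linear beta schedule values are conventional defaults, not taken from the lesson.

```python
import torch

def noise_model(x, t):
    # Hypothetical stand-in for a trained eps-prediction network;
    # a real model would predict the noise added at timestep t.
    return torch.zeros_like(x)

def decode(latent, t_start, num_steps=1000):
    """Reverse diffusion from timestep t_start down to 0.

    Instead of starting from pure noise, this accepts any (noisy)
    latent, so an encoded image can be passed in directly.
    """
    # Conventional linear beta schedule (an assumption, not from the course).
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = latent
    for t in range(t_start, 0, -1):
        eps = noise_model(x, t)
        # Posterior mean of x_{t-1} given x_t and the predicted noise.
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        # Add sampling noise on every step except the last.
        noise = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x
```

Calling `decode(noisy_latent, t_start)` with a partially noised latent runs only the remaining denoising steps, which is the behavior the lesson builds toward.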
Contents
- Topics (58s)
- Generation as a reversible process (4m 55s)
- Sampling as iterative denoising (4m 9s)
- Diffusers and the Hugging Face ecosystem (6m 51s)
- Generating images with diffusers pipelines (28m 20s)
- Deconstructing the diffusion process (19m 9s)
- Forward process as encoder (16m 47s)
- Reverse process as decoder (7m 18s)
- Interpolating diffusion models (9m 26s)
- Image-to-image translation with SDEdit (8m 4s)
- Image restoration and enhancement (11m 23s)