From the course: Programming Generative AI: From Variational Autoencoders to Stable Diffusion with PyTorch and Hugging Face

Reverse process as decoder

- [Instructor] Now that we have encoded our image into the diffusion model's latent space, we can decode it and see whether the reconstruction is reasonably accurate. To run the decoding process, we'll wrap everything in a decode function, again to mirror the latent variable model API. This decode is really the reverse process of the diffusion, and it's identical to what we ran before. When we unconditionally generate an image with diffusion, we only run the reverse process. So what we've seen so far is essentially running decode with something like pure noise, or with an image encoded to timestep t = 0. Let's write our decode here. For time's sake, I'm going to pretty much copy what we saw before, since that loop is really all we need; it is exactly the reverse process of the diffusion model. The only slight change we're going to make is that, instead of taking pure noise as input, we're going to pass in…
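As a rough illustration of the decode function being described, here is a minimal NumPy sketch of the reverse (denoising) loop of a DDPM-style diffusion model. All names here (`make_schedule`, `decode`, `predict_noise`) are hypothetical stand-ins, not the course's actual code, and the zero-noise "model" exists only so the loop runs end to end:

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    # Linear noise schedule, as in the original DDPM formulation.
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def decode(x_t, predict_noise, T=50, seed=0):
    """Run the reverse diffusion loop from timestep T-1 down to 0.

    x_t can be pure noise (unconditional generation) or a latent
    produced by encoding an image to some timestep, as in the video.
    `predict_noise(x, t)` is a stand-in for the trained noise model.
    """
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = x_t
    for t in reversed(range(T)):
        eps = predict_noise(x, t)
        # Posterior mean of x_{t-1} given x_t and the predicted noise.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:
            # Add scaled Gaussian noise at every step except the last.
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
        else:
            x = mean
    return x

# Toy "model" that always predicts zero noise, just to exercise the loop.
zero_model = lambda x, t: np.zeros_like(x)
sample = decode(np.random.default_rng(1).standard_normal((4, 4)), zero_model)
```

In the actual course code the `predict_noise` call would be the trained U-Net, and the loop would run on PyTorch tensors rather than NumPy arrays; the control flow, however, is the same reverse-process loop referred to above.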
