title: "[Summary] MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation"
date: 2023-05-19
tags:
- Diffusion Models
- Image Editing
- Controllability

TL;DR
To enable more controllable image diffusion, MultiDiffusion introduces patch-based generation with a global constraint.

Problem statement
Diffusion models lack user controllability, and methods that offer such control require costly fine-tuning.

Method
The method can be reduced to the following algorithm. At each time step t:
1. Extract patches from the global image I_{t-1}.
2. Execute the denoising step to generate the patches J_{i,t}.
3. Combine the patches by averaging their pixel values to create the global image I_t.

For the panorama use case: simply generate N images with overlapping regions between them....
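The per-step fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `denoise_step` is a hypothetical stand-in for a real diffusion denoiser, and the patch size `P` and stride `S` are illustrative values.

```python
import numpy as np

P, S = 64, 32  # patch size and stride (illustrative values)

def denoise_step(patch, t):
    # Hypothetical stand-in for a real diffusion denoiser;
    # here it simply returns the patch unchanged.
    return patch

def multidiffusion_step(image, t):
    """One fusion step: denoise overlapping patches of I_{t-1},
    then average overlapping pixels to form the global image I_t."""
    H, W = image.shape[:2]
    acc = np.zeros_like(image, dtype=np.float64)   # sum of patch outputs
    count = np.zeros((H, W, 1), dtype=np.float64)  # overlap count per pixel
    for y in range(0, H - P + 1, S):
        for x in range(0, W - P + 1, S):
            out = denoise_step(image[y:y+P, x:x+P], t)  # J_{i,t}
            acc[y:y+P, x:x+P] += out
            count[y:y+P, x:x+P] += 1.0
    return acc / np.maximum(count, 1.0)            # averaged global image I_t
```

With the identity denoiser above, the averaged result reproduces the input; in practice each patch would be denoised by the pretrained model, and the averaging is what reconciles their overlapping, possibly conflicting predictions.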


title: "[Summary] Break-A-Scene: Extracting Multiple Concepts from a Single Image"
date: 2023-07-21
tags:
- Diffusion Models
- Concept Extraction
- Image Generation

TL;DR
Fine-tune a diffusion model on a single image to generate images conditioned on user-provided concepts.

Problem statement
Diffusion models cannot generate new images of user-provided concepts. Methods that enable this capability (e.g., DreamBooth) require several input images containing the desired concept.

Method
The method consists of two phases....
