Decompose and Realign:
Tackling Condition Misalignment in
Text-to-Image Diffusion Models

1HKUST(GZ), 2HKUST, 3Zhejiang Lab, 4Zhejiang University, 5Adobe Research
*Indicates equal contribution. Indicates corresponding author.

Illustration of Decompose and Realign, showcasing its ability to handle misalignment between conditions in controllable generation tasks. For example, given several control conditions (e.g., depth, pose, bounding box) specifying the layout structure and a text condition providing extra guidance (e.g., a new object, a new spatial relationship), our method generates high-quality, plausible outputs that satisfy all given conditions without any training.

Abstract

Text-to-image diffusion models have advanced towards more controllable generation by supporting various additional conditions (e.g., depth map, bounding box) beyond text. However, these models are trained under the premise of perfect alignment between the text and the extra conditions. When this alignment is not satisfied, the final output may be dominated by one condition, or ambiguity may arise, failing to meet user expectations. To address this issue, we present a training-free approach called "Decompose and Realign" that further improves the controllability of existing models when provided with partially aligned conditions. The "Decompose" phase separates conditions into pairs based on their relationships and computes the result for each pair individually, ensuring that no pair contains conflicting conditions. The "Realign" phase then aligns these independently computed results via a cross-attention mechanism to avoid new conflicts when combining them. Both qualitative and quantitative results demonstrate the effectiveness of our approach in handling misaligned conditions; it performs favorably against recent methods and, more importantly, adds flexibility to the controllable image generation process.
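The two phases described above can be sketched as follows. This is a minimal, illustrative sketch, not the paper's implementation: the real method operates inside a diffusion U-Net's denoising loop and its text-image cross-attention layers, which are mocked here with NumPy arrays, and all function and key names (`decompose`, `predict_noise`, `realign`, `"control"`, `"text"`) are hypothetical.

```python
import numpy as np

def decompose(conditions):
    # "Decompose" phase: split the inputs into (control, text-fragment)
    # pairs so that no single pair contains conflicting conditions.
    # Each element of `conditions` is a dict with hypothetical keys.
    return [(c["control"], c["text"]) for c in conditions]

def predict_noise(control, text, latent):
    # Stand-in for one conditional denoising prediction per pair
    # (in the real method, a U-Net forward pass conditioned on the pair).
    rng = np.random.default_rng(abs(hash((control, text))) % (2**32))
    return latent + 0.1 * rng.standard_normal(latent.shape)

def realign(per_pair_eps, attn_maps):
    # "Realign" phase: fuse the per-pair predictions using weights
    # derived from text-image cross-attention maps. Here we simply
    # softmax the mean attention score of each pair.
    scores = np.array([a.mean() for a in attn_maps])
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    fused = sum(wi * eps for wi, eps in zip(w, per_pair_eps))
    return fused, w
```

As a usage sketch, two partially aligned conditions (a depth map bound to "a car", a pose bound to "a dog") are decomposed, denoised independently, and fused: `pairs = decompose(conds)`, `eps = [predict_noise(c, t, latent) for c, t in pairs]`, `fused, w = realign(eps, attn_maps)`. The fused prediction keeps the latent's shape and the realignment weights sum to one.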

Framework of Decompose and Realign


Challenges in Multi-Condition Image Synthesis.

Left: the "dominance" effect exists across various methods, where one object (e.g., the car) overshadows the generation and the other (e.g., the dog) is omitted.
Right: the "ambiguity" issue, where it is unclear which object in the text corresponds to the intended depth condition.


Comparisons of "Dominance" Challenge.

D&R ensures that objects mentioned in the text but not linked to any additional condition are still accurately generated (e.g., the "tiger" in the first example).


Comparisons of "Ambiguity" Challenge.

Our method effectively binds each condition to one of the text elements, thereby eliminating the "ambiguity" phenomenon.


Leveraging Generative Model Priors.

By aligning within the text-image cross-attention, our method better utilizes the priors of the text-to-image model. Compared to (b), the images generated by our method are therefore not limited to "and" or "not" relationships between objects. In contrast to (a), our approach eliminates the need for manual intervention, i.e., no mask needs to be provided.


BibTeX

@misc{wang2023decompose,
      title={Decompose and Realign: Tackling Condition Misalignment in Text-to-Image Diffusion Models}, 
      author={Luozhou Wang and Guibao Shen and Wenhang Ge and Guangyong Chen and Yijun Li and Ying-cong Chen},
      year={2023},
      eprint={2306.14408},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}