MIRFuse: an infrared and visible image fusion model based on disentanglement representation via mutual information regularization
Abstract
In the domain of image fusion, integrating infrared and visible images provides a more complete scene description by combining the complementary strengths of each modality. Existing methods struggle to handle the differences between modalities, a difficulty caused by the inherent entanglement of scene-common information and modality-specific information within each modality. In response, we propose MIRFuse, a model for infrared and visible image fusion based on disentanglement representation via mutual information regularization. The disentanglement process, in which scene-common information and modality-specific information are separated, forms the basis for identifying both shared and exclusive features. First, mutual information maximization is used as a consistency constraint, enabling the scene-common encoders to effectively extract shared features. Second, the Hilbert–Schmidt independence criterion is employed as a heterogeneity constraint, encouraging the modality-specific encoders to extract exclusive features. Finally, the shared and exclusive features are combined using various fusion strategies to produce a fused image. The resulting fused image provides a comprehensive representation of the scene, allowing information from multiple modalities to be exploited more effectively. Experiments validate the effectiveness and superiority of our method.
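As a point of reference for the heterogeneity constraint mentioned above, the sketch below shows the standard biased empirical estimator of the Hilbert–Schmidt independence criterion with Gaussian kernels. It is a minimal illustration only; the kernel choice, bandwidth `sigma`, and the exact way the criterion enters the MIRFuse training loss are assumptions, not the paper's specified formulation.

```python
import torch

def hsic(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased empirical HSIC estimate between two feature batches.

    x, y: (n, d) feature tensors, e.g. outputs of two modality-specific
    encoders for the same batch. A small value indicates (near-)independence,
    so minimizing HSIC encourages the two feature sets to carry distinct,
    non-overlapping information.
    """
    n = x.size(0)
    # Gaussian (RBF) kernel Gram matrices over the batch.
    kx = torch.exp(-torch.cdist(x, x).pow(2) / (2 * sigma ** 2))
    ky = torch.exp(-torch.cdist(y, y).pow(2) / (2 * sigma ** 2))
    # Centering matrix H = I - (1/n) * 1 1^T.
    h = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    # HSIC = tr(Kx H Ky H) / (n - 1)^2  (biased estimator).
    return torch.trace(kx @ h @ ky @ h) / (n - 1) ** 2
```

In a training loop, such a term would typically be added to the total loss with a weighting coefficient so that gradient descent drives the infrared-specific and visible-specific features toward statistical independence.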