Paper Title

Explicit and implicit models in infrared and visible image fusion

Authors

Wang, Zixuan; Sun, Bin

Abstract

Infrared and visible images, as multi-modal image pairs, differ significantly in how they represent the same scene. The image fusion task faces two problems: one is preserving the unique features of each modality, and the other is preserving features at multiple levels, such as local and global features. This paper discusses the limitations of deep learning models in image fusion and the corresponding optimization strategies. We divide models into explicit models, which rely on artificially designed structures and constraints, and implicit models, which adaptively learn high-level features or establish global pixel associations. Ten models were selected for comparison experiments on 21 test sets. The qualitative and quantitative results show that the implicit models learn image features more comprehensively, but their stability needs to be improved. Considering the advantages and the unresolved limitations of existing algorithms, we discuss the main open problems in multi-modal image fusion and future research directions.
