Interactive nonlocal joint learning network for red, green, blue plus depth salient object detection

Peng LI, Zhilei CHEN, Haoran XIE, Mingqiang WEI, Fu Lee WANG, Kwok Shing CHENG

Research output: Contribution to journal › Article › peer-review

Abstract

Research into red, green, blue plus depth (RGB-D) salient object detection (SOD) has identified the challenging problem of how to exploit raw depth features and fuse cross-modal (CM) information. To solve this problem, we propose an interactive nonlocal joint learning (INL-JL) network for high-quality RGB-D SOD. INL-JL benefits from three key components. First, we carry out joint learning to extract common features from RGB and depth images. Second, we adopt simple yet effective CM fusion blocks at lower levels while leveraging the proposed INL blocks at higher levels, aiming to purify the depth features and make CM fusion more efficient. Third, we utilize a dense multiscale transfer strategy to infer saliency maps. INL-JL outperforms state-of-the-art methods on five public datasets, demonstrating its power to improve the quality of RGB-D SOD. Copyright © 2022 SPIE and IS&T.
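The abstract describes the architecture only at a high level, so the sketch below illustrates, in PyTorch, what one direction of a cross-modal nonlocal fusion step could look like. It is a minimal sketch of generic nonlocal cross-modal attention, not the paper's actual INL block; all names (InteractiveNonlocalBlock, q_rgb, k_d, v_d, and so on) are hypothetical.

```python
# A minimal sketch (assumed design) of cross-modal nonlocal attention:
# RGB features query depth features, so the depth stream can refine the
# RGB stream. The paper's actual INL block may differ in detail.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractiveNonlocalBlock(nn.Module):
    """Refines RGB features with nonlocal attention over depth features."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inner = channels // reduction
        # 1x1 projections: queries from RGB, keys/values from depth
        self.q_rgb = nn.Conv2d(channels, inner, 1)
        self.k_d = nn.Conv2d(channels, inner, 1)
        self.v_d = nn.Conv2d(channels, inner, 1)
        self.out = nn.Conv2d(inner, channels, 1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        b, _, h, w = rgb.shape
        q = self.q_rgb(rgb).flatten(2).transpose(1, 2)   # (B, N, C')
        k = self.k_d(depth).flatten(2)                   # (B, C', N)
        v = self.v_d(depth).flatten(2).transpose(1, 2)   # (B, N, C')
        # Nonlocal affinity: every RGB position attends to every depth position
        attn = F.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)   # (B, N, N)
        fused = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)  # (B, C', H, W)
        # Residual connection keeps the original RGB stream intact
        return rgb + self.out(fused)
```

In a full model, such a block would presumably be applied in both directions (RGB attending to depth and depth attending to RGB) at the higher encoder levels, mirroring the interactive, depth-purifying design the abstract describes.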

Original language: English
Article number: 063040
Journal: Journal of Electronic Imaging
Volume: 31
Issue number: 6
DOI: https://doi.org/10.1117/1.JEI.31.6.063040
Publication status: Published - Dec 2022

Citation

Li, P., Chen, Z., Xie, H., Wei, M., Wang, F. L., & Cheng, G. (2022). Interactive nonlocal joint learning network for red, green, blue plus depth salient object detection. Journal of Electronic Imaging, 31(6), 063040. https://doi.org/10.1117/1.JEI.31.6.063040
