2021 article

GENERATIVE INFORMATION FUSION

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), pp. 3990–3994.

By: K. Tran, W. Sakla & H. Krim

author keywords: multimodal fusion; remote sensing; GANs
TL;DR: Demonstrates the ability to exploit sensing modalities to compensate for a missing modality or to potentially re-target resources; experiments show that emulating a multi-modal system by perturbing a single modality with noise can achieve results competitive with using multiple modalities. (via Semantic Scholar)
UN Sustainable Development Goal Categories
13. Climate Action (Web of Science)
15. Life on Land (Web of Science)
Source: Web Of Science
Added: November 29, 2021

In this work, we demonstrate the ability to exploit sensing modalities to compensate for a missing modality or to potentially re-target resources. This amounts to developing proxy sensing capabilities for multi-modal learning. In classical fusion, multiple sensors are required to capture different information about the same target. Maintaining and collecting samples from multiple sensors can be financially demanding, and the effort necessary to ensure a logical mapping between the modalities may be prohibitive. We examine the scenario where all modalities are available during training, but only a single modality is available at test time. In our approach, we initialize the parameters of the single-modality inference network with weights learned from the fusion of multiple modalities through both classification and GAN losses. Our experiments show that emulating a multi-modal system by perturbing a single modality with noise can achieve results competitive with using multiple modalities.
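To make the train/test asymmetry concrete, below is a minimal PyTorch-style sketch of one plausible reading of this setup: a two-branch fusion classifier is trained with both modalities, and at test time the missing modality is emulated by feeding a noise-perturbed copy of the available one into the second branch. The class names (Encoder, FusionClassifier), dimensions, noise scale, and the use of a classification loss alone (the paper additionally uses GAN losses and a weight-initialization step for the single-modality network) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Per-modality feature encoder (architecture is illustrative)."""
    def __init__(self, in_dim, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
    def forward(self, x):
        return self.net(x)

class FusionClassifier(nn.Module):
    """Concatenates features from two modality branches and classifies."""
    def __init__(self, in_a, in_b, n_classes, feat_dim=64):
        super().__init__()
        self.enc_a = Encoder(in_a, feat_dim)  # e.g., optical imagery
        self.enc_b = Encoder(in_b, feat_dim)  # e.g., a second sensor
        self.head = nn.Linear(2 * feat_dim, n_classes)
    def forward(self, x_a, x_b):
        feats = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1)
        return self.head(feats)

# Training: both modalities available (one step on a toy batch; the
# paper's GAN losses are omitted here for brevity).
model = FusionClassifier(in_a=32, in_b=32, n_classes=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
x_a, x_b = torch.randn(16, 32), torch.randn(16, 32)
y = torch.randint(0, 10, (16,))
loss = ce(model(x_a, x_b), y)
opt.zero_grad(); loss.backward(); opt.step()

# Test time: only modality A is available. Emulate the multi-modal
# system by perturbing modality A with noise and feeding the result
# into the branch trained on modality B.
model.eval()
with torch.no_grad():
    x_b_proxy = x_a + 0.1 * torch.randn_like(x_a)  # noise scale assumed
    logits = model(x_a, x_b_proxy)
```

The key design point reflected in the sketch is that no second sensor is queried at inference: the trained fusion machinery is retained, and a cheap noise perturbation of the single available modality stands in for the absent one.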