2021 journal article

Event driven sensor fusion

SIGNAL PROCESSING, 188.

By: S. Roheda, H. Krim, Z. Luo & T. Wu

author keywords: Sensor fusion; Multi-modal fusion; Event driven classification
topics (OpenAlex): Target Tracking and Data Fusion in Sensor Networks; Fault Detection and Control Systems; Distributed Sensor Networks and Detection Algorithms
TL;DR: This paper addresses the issue of coping with damaged sensors when using the model, by learning a hidden space between sensor modalities which can be exploited to safeguard detection performance. (via Semantic Scholar)
UN Sustainable Development Goal Categories
16. Peace, Justice and Strong Institutions (OpenAlex)
Source: Web Of Science
Added: August 23, 2021

Multi-sensor fusion has long been of interest in target detection and tracking. Different sensors are capable of observing different characteristics of a target, thereby providing additional information toward determining a target's identity. If used constructively, any additional information should have a positive impact on the performance of the system. In this paper, we consider such a scenario and present a principled approach toward ensuring a constructive combination of the various sensors. We look at decision-level sensor fusion in a different light, wherein each sensor is said to make a decision on the occurrence of certain events that it is capable of observing, rather than on whether a certain target is present. These events are formalized for each sensor according to the attributes it can potentially extract, and are then used to define targets. The proposed technique also explores the extent of dependence between the features/events being observed by the sensors, and hence generates more informed probability distributions over the events. We study two different datasets: the first combines a radar sensor with an optical sensor for the detection of space debris, while the second combines a seismic sensor with an acoustic sensor to detect human and vehicular targets in a field of interest. Given some additional information about the features of a target, this fusion technique can outperform existing decision-level fusion approaches that do not take into account the relationships between different features. Furthermore, this paper also addresses the issue of coping with damaged sensors by learning a hidden space between sensor modalities that can be exploited to safeguard detection performance.
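The abstract sketches the general recipe: each sensor reports a probability for the events it can actually observe, and the target decision is then made from the joint distribution of those events rather than from independent per-sensor target calls. As a rough, hypothetical illustration of that idea only (not the authors' implementation), the sketch below fuses two sensors' event probabilities; the event definitions, the dependence factor r, and the threshold are all illustrative assumptions.

```python
# Toy sketch of event-driven decision-level fusion (illustrative only).
# Sensor A (say, seismic) observes event e1 = "ground vibration";
# sensor B (say, acoustic) observes event e2 = "engine noise".
# Each sensor reports P(event occurred | its own measurements).

def fuse_events(p_e1: float, p_e2: float, r: float = 1.0) -> float:
    """Joint probability that both events occurred.

    r = P(e1, e2) / (P(e1) * P(e2)) is a dependence factor that would be
    estimated from event co-occurrence statistics in training data;
    r = 1.0 recovers the naive independence assumption.
    """
    return min(1.0, r * p_e1 * p_e2)

p_e1, p_e2 = 0.85, 0.70                  # per-sensor event beliefs (assumed)
p_indep = fuse_events(p_e1, p_e2)        # independence assumption
p_dep = fuse_events(p_e1, p_e2, r=1.25)  # events co-occur for vehicles (assumed)

threshold = 0.6  # detection threshold, tuned on held-out data
print(f"independent fusion:      {p_indep:.3f}")
print(f"dependence-aware fusion: {p_dep:.3f}")
print("declare vehicular target" if p_dep > threshold else "no target")
```

When the events are positively correlated, the independence assumption (r = 1) understates their joint probability, which is the gap the paper's dependence modeling is meant to close; the learned hidden space between modalities then lets a surviving sensor stand in for a damaged one at decision time.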