Original Date: 03/08/1999
Revision Date: 01/18/2007
Information: Multisensor Data Fusion for Improved Fault Detection and Diagnostics
Multisensor data fusion is a continuous process dealing with the association, correlation, and combination of information from multiple sources. The Applied Research Laboratory at the Pennsylvania State University (ARL Penn State) is using this process to achieve refined condition estimation of machinery and to complete timely assessments of resulting consequences and their significance. This emerging technology is also being applied to Department of Defense (DoD) areas (e.g., automated target recognition, battlefield surveillance, guidance/control of autonomous vehicles) and non-DoD areas (e.g., monitoring complex machinery, medical diagnostics, smart buildings).
Data fusion techniques come from a wide range of disciplines including artificial intelligence and statistical estimation. By combining data from multiple sensors and related information from associated databases, these techniques achieve improved accuracy and more specific inferences compared to methods that use a single sensor. While data fusion is not a new concept, the emergence of new sensors, advanced processing techniques, and improved processing hardware makes real-time fusion of data increasingly possible.
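The statistical benefit of combining redundant observations can be illustrated with a small sketch. The example below is not drawn from the survey; the sensor names, noise variances, and temperature values are assumed purely for illustration. It fuses two noisy readings of the same quantity with inverse-variance weighting, so the fused estimate has lower variance than either sensor alone.

# Minimal sketch (Python) of the statistical advantage of fusing redundant sensors.
# Assumptions (not from the survey): two sensors observe the same bearing
# temperature with independent, zero-mean Gaussian noise.
import random

def fuse(measurements, variances):
    """Inverse-variance weighted average of scalar measurements."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * m for w, m in zip(weights, measurements)) / total
    return estimate, 1.0 / total   # fused variance is 1 / (sum of weights)

if __name__ == "__main__":
    true_temp = 80.0                 # hypothetical true bearing temperature (deg C)
    var_a, var_b = 4.0, 9.0          # assumed sensor noise variances
    meas_a = random.gauss(true_temp, var_a ** 0.5)
    meas_b = random.gauss(true_temp, var_b ** 0.5)
    est, var = fuse([meas_a, meas_b], [var_a, var_b])
    print(f"sensor A: {meas_a:.2f}  sensor B: {meas_b:.2f}")
    print(f"fused: {est:.2f} (variance {var:.2f} vs. best single sensor {min(var_a, var_b):.2f})")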
Until recently, the DoD primarily used data fusion systems for target tracking, automated identification of targets, and limited automated reasoning applications. Renewed interest by the DoD has changed data fusion technology from a loose collection of related techniques into an emerging, true engineering discipline with standardized terminology, techniques, and systems design principles. Techniques to combine (or fuse) data are drawn from a diverse set of traditional disciplines including digital signal processing, statistical estimation, control theory, artificial intelligence, and classical numerical methods. In principle, fusion of multisensor data provides significant advantages over single-source data. Beyond the statistical advantage gained by combining same-source data, the use of multiple types of sensors may increase the accuracy with which a quantity can be observed and characterized.
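The point about multiple sensor types can also be sketched in code. The following example is illustrative only: the candidate machine conditions, sensor names, and likelihood values are assumed, and the combination rule is a simple naive Bayes product, one of many possible identity-fusion techniques. It shows how evidence from two dissimilar sensors sharpens an inference beyond what either supports alone.

# Illustrative naive Bayes fusion of heterogeneous sensor evidence (assumed values).

def bayes_fuse(prior, sensor_likelihoods):
    """Combine per-sensor likelihoods over a common set of hypotheses."""
    posterior = dict(prior)
    for likelihood in sensor_likelihoods:
        posterior = {h: posterior[h] * likelihood[h] for h in posterior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

if __name__ == "__main__":
    hypotheses = ("normal", "bearing_wear", "imbalance")                 # hypothetical conditions
    prior = {h: 1.0 / len(hypotheses) for h in hypotheses}
    vibration = {"normal": 0.2, "bearing_wear": 0.7, "imbalance": 0.1}   # assumed likelihoods
    acoustic = {"normal": 0.3, "bearing_wear": 0.6, "imbalance": 0.1}
    print(bayes_fuse(prior, [vibration, acoustic]))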
In 1986, the Joint Directors of Laboratories (JDL) Data Fusion Working Group was established to improve communications among military researchers and systems developers. The group's initial effort to codify the terminology related to data fusion resulted in the creation of a process model for data fusion and a data fusion lexicon. The Data Fusion Domain (Figure 3-1) is a functionally oriented process model of data fusion, intended to be general and useful across multiple application areas. The model identifies the processes, functions, categories, and specific techniques applicable to data fusion. The most mature area of data fusion processing is Level 1, which uses multisensor data to determine the position, velocity, attributes, and identity of individual objects or entities. Levels 2 and 3 are currently dominated by knowledge-based methods such as rule-based blackboard systems; these methods remain relatively immature, with numerous prototypes. Level 4, which assesses and improves the performance or operation of an ongoing data fusion process, has mixed maturity.
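The four levels of the model can be pictured as stages in a processing pipeline. The sketch below is a structural outline only: the level names follow the JDL model described above, but the data fields, rules, and outputs are placeholders invented for illustration, not ARL Penn State's implementation.

# Skeleton (Python) of the JDL process model levels as a simple pipeline.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FusionState:
    """State passed between processing levels (illustrative shapes only)."""
    raw_observations: List[dict] = field(default_factory=list)
    objects: List[dict] = field(default_factory=list)     # Level 1: tracked entities
    situation: dict = field(default_factory=dict)         # Level 2: aggregated picture
    impact: dict = field(default_factory=dict)            # Level 3: consequences/significance
    controls: dict = field(default_factory=dict)          # Level 4: process refinement

def level1_object_refinement(state: FusionState) -> FusionState:
    # Associate observations and estimate position, velocity, attributes, identity.
    state.objects = [{"id": i, "obs": o} for i, o in enumerate(state.raw_observations)]
    return state

def level2_situation_refinement(state: FusionState) -> FusionState:
    # Knowledge-based aggregation of objects into a situation picture (placeholder rule).
    state.situation = {"object_count": len(state.objects)}
    return state

def level3_impact_assessment(state: FusionState) -> FusionState:
    # Assess the consequences and significance of the current situation (placeholder).
    state.impact = {"alert": state.situation.get("object_count", 0) > 1}
    return state

def level4_process_refinement(state: FusionState) -> FusionState:
    # Monitor fusion performance and adapt sensor tasking (placeholder).
    state.controls = {"request_more_data": not state.objects}
    return state

PIPELINE: List[Callable[[FusionState], FusionState]] = [
    level1_object_refinement,
    level2_situation_refinement,
    level3_impact_assessment,
    level4_process_refinement,
]

def run(observations: List[dict]) -> FusionState:
    state = FusionState(raw_observations=observations)
    for level in PIPELINE:
        state = level(state)
    return state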
The data fusion community is evolving rapidly, and ARL Penn State is among the leaders. Significant investments by the DoD have resulted in rapid evolution of microprocessors, advanced sensors, and new technologies which, in turn, create new capabilities to combine data from multiple sensors for improved inferences. Applications are now entering the condition-based maintenance (CBM) area. Implementation of such systems requires an understanding of basic terminology, data fusion process models, and architectures, as demonstrated by ARL Penn State.
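A condition-based maintenance application can be sketched in a few lines. The example below is hypothetical: the feature names, baseline statistics, readings, and alarm threshold are assumed for illustration. It combines normalized deviations from two channels into one health score, flagging a degraded condition that neither channel would flag on its own.

# Hypothetical CBM example: fusing two sensor channels into one health score.

def health_score(features, baselines):
    """Sum of per-channel deviations from baseline, in standard deviations."""
    return sum(abs(x - mu) / sigma for x, (mu, sigma) in zip(features, baselines))

if __name__ == "__main__":
    baselines = [(2.0, 0.5), (70.0, 5.0)]   # assumed (mean, std) for vibration RMS, oil temperature
    reading = [2.9, 78.0]                   # each channel is under 2 sigma on its own
    score = health_score(reading, baselines)
    print(f"combined score: {score:.2f} -> {'fault suspected' if score > 3.0 else 'normal'}")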
Figure 3-1. Data Fusion Domain
For more information, see the Point of Contact for this survey.