Abstract

Learning from multimodal data has become increasingly widespread, as collecting complementary information from multiple sources grows easier and principled statistical inference on multimodal data provides deeper insight into an event of interest. Nevertheless, multimodal learning poses practical challenges: many modern datasets are high-dimensional, signals from different sources are often highly correlated, and the types and dimensions of data from different sources differ. This dissertation develops effective learning methods for these problems in classification settings. Using deep neural network models, we propose new combining methods that account for the importance of each modality and weight modalities by their predictive uncertainty. We demonstrate the practical efficacy of the proposed methods on digit classification and Alzheimer's disease classification problems. Numerical experiments confirm that the proposed method performs favorably compared with other multimodal methods, and that proper feature selection further improves classification accuracy. Moreover, the proposed method yields the most reliable predictions for new observations among the multimodal learning methods compared.
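To illustrate the general idea of weighting modalities by predictive uncertainty, the sketch below fuses per-modality class probabilities using inverse-entropy weights, so a more confident modality contributes more to the combined prediction. This is a minimal illustration of the concept, not the dissertation's actual method; the function names and the inverse-entropy weighting rule are assumptions for this example.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector (predictive uncertainty)."""
    return -np.sum(p * np.log(p + eps))

def uncertainty_weighted_fusion(modality_probs):
    """Combine per-modality class probabilities, down-weighting modalities
    whose predictions are more uncertain (higher entropy).
    Weights are inverse entropies, normalized to sum to one.
    Note: illustrative fusion rule, not the dissertation's method."""
    inv = np.array([1.0 / (entropy(p) + 1e-12) for p in modality_probs])
    weights = inv / inv.sum()
    fused = sum(w * p for w, p in zip(weights, modality_probs))
    return fused / fused.sum(), weights

# Two modalities predicting 3 classes: one confident, one uncertain.
p_image = np.array([0.90, 0.05, 0.05])  # low entropy -> larger weight
p_text  = np.array([0.40, 0.35, 0.25])  # high entropy -> smaller weight
fused, weights = uncertainty_weighted_fusion([p_image, p_text])
```

In this toy example the confident modality receives the larger weight, so the fused distribution stays close to its prediction while still incorporating information from the less certain source.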
