Abstract
3D Convolutional Neural Networks have achieved great success in many applications, including Alzheimer's Disease classification on medical image volumes. To increase their classification transparency and verify the credibility of their predictions, we investigate 3D classification networks trained on 3D MNIST and OASIS-2 using two visualization techniques from deep learning: Class Activation Mapping (CAM) and Layer-wise Relevance Propagation (LRP). We evaluate how well the resulting heatmaps represent the relevance of input regions to the network's prediction from three perspectives: 1) visual interpretability, 2) quantitative measurement based on the Area Over the Perturbation Curve (AOPC), and 3) a sanity check. The experimental comparison between CAM and LRP shows that CAM suffers from an inconsistency between visual interpretability and heatmap quality, while LRP locates visually more meaningful regions for classification but can fail the sanity check.
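For context, AOPC measures how quickly the classifier's output score drops as input regions are perturbed in decreasing order of heatmap relevance. A sketch of the commonly used definition, following Samek et al. (the notation below is ours and not necessarily the paper's):

% AOPC under most-relevant-first (MoRF) perturbation; notation assumed, not from the paper.
\[
\mathrm{AOPC} \;=\; \frac{1}{L+1}\,
\Big\langle \sum_{k=0}^{L} f\big(x^{(0)}\big) - f\big(x^{(k)}\big) \Big\rangle
\]

Here \(x^{(0)}\) is the unperturbed input, \(x^{(k)}\) is the input after the \(k\) most relevant regions (as ranked by the heatmap) have been perturbed, \(f\) is the classifier's output score for the predicted class, and \(\langle\cdot\rangle\) denotes the average over the dataset. A larger AOPC indicates that the heatmap better identifies the regions the network actually relies on.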