Abstract
Recommending items to users with accompanying explanations increases a system's transparency, persuasiveness, and user satisfaction. The natural readability of user reviews holds great promise for building explainable recommendation systems. However, current methods usually map review texts into incomprehensible latent representations. Moreover, many methods fuse user and item embeddings through deep networks whose prediction process is opaque to humans, further undermining interpretability. In this thesis, we propose a novel model called DIRECT, which exploits meaningful content in raw review text to explain its recommendations. In addition, its decision function is designed as a generalized additive model (GAM), making the decision process of the model itself interpretable. DIRECT achieves inference time that is linear in the length of item reviews. We conduct experiments, including ablation studies, on multiple real-world datasets; visualizations and case studies validate both the performance and the interpretability of DIRECT.