Abstract

As Graph Neural Networks (GNNs) find increasing application, the fairness of these networks has emerged as a pressing issue. This thesis explores fairness in graph learning, introducing a novel problem setting in which the data distributions of the training and test sets differ. Our experiments expose the limitations of existing fair graph learning methods, showing that they can fail to mitigate bias when the training and test distributions diverge. To address this challenge, we propose a framework that handles such distribution shifts, thereby improving the fairness of outcomes. Experiments demonstrate that our method outperforms state-of-the-art models on fairness metrics while maintaining comparable prediction accuracy.
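The abstract does not specify which fairness metrics are used. A minimal sketch, assuming the two group-fairness metrics most common in the fair graph learning literature (statistical parity and equal opportunity), might look like this; the function names and the binary sensitive attribute are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)|: the demographic parity gap.

    y_pred: binary predictions; sensitive: binary group labels.
    (Illustrative metric, assumed rather than taken from the thesis.)
    """
    y_pred = np.asarray(y_pred)
    s = np.asarray(sensitive)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equal_opportunity_difference(y_true, y_pred, sensitive):
    """Gap in true-positive rates between the two sensitive groups."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, sensitive))
    pos = y_true == 1  # restrict to actual positives
    return abs(y_pred[pos & (s == 0)].mean() - y_pred[pos & (s == 1)].mean())
```

Lower values indicate fairer predictions; the abstract's claim is that these gaps stay small even when the test-set distribution of the sensitive attribute differs from the training set's.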
