Abstract

The prevalence of machine learning applications in decision-making has sparked abundant interest in the fairness of machine learning. Existing notions of fairness are mainly defined on predictions, such as equalized odds. This paper characterizes unfairness in regression from a new perspective by inspecting prediction errors. In particular, we first define a new fairness measurement, equalized error, which measures the dependence of the prediction error on sensitive attributes. We then propose a regularization approach called Fairness Regularization with Equalized Error (FREE), which can handle more dimensions of fairness. We conduct extensive experiments on both simulated and real-world datasets to evaluate our approach's effectiveness in terms of mean squared error, the Hirschfeld-Gebelein-R\'{e}nyi (HGR) maximal correlation coefficient, and the overlapping index. The results show that our approach reduces unfairness in error more effectively than representative methods.
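The abstract does not give the exact form of the FREE penalty, but the idea of regularizing the dependence of the prediction error on a sensitive attribute can be sketched as follows. This is a hypothetical, minimal illustration: it fits a linear model by gradient descent and penalizes the squared covariance between the residual and the sensitive attribute `s` (a simple stand-in for the error/sensitive-attribute dependence that equalized error targets; the paper's actual regularizer and the HGR-based evaluation are not reproduced here).

```python
import numpy as np

def free_regression(X, y, s, lam=1.0, lr=0.01, epochs=2000):
    """Linear regression with a fairness penalty on prediction errors.

    Hypothetical sketch: minimizes MSE(w) + lam * cov(y - Xw, s)^2,
    i.e. the prediction error is discouraged from co-varying with the
    sensitive attribute s. The exact FREE penalty is not specified in
    the abstract; this covariance term is an illustrative assumption.
    """
    n, d = X.shape
    w = np.zeros(d)
    s_c = s - s.mean()  # centered sensitive attribute
    for _ in range(epochs):
        err = y - X @ w
        cov = (s_c @ err) / n                 # cov(error, s)
        grad_mse = -(2.0 / n) * (X.T @ err)   # gradient of mean squared error
        grad_fair = 2.0 * lam * cov * (-(X.T @ s_c) / n)  # gradient of lam*cov^2
        w -= lr * (grad_mse + grad_fair)
    return w
```

With `lam=0` this reduces to ordinary least squares; increasing `lam` trades some accuracy for a residual that is less correlated with `s`, which is the qualitative behavior the equalized-error criterion asks for.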
