FEPC: Fairness Estimation Using Prototypes and Critics for Tabular Data

Amit Giloni, Edita Grolman, Yuval Elovici, Asaf Shabtai

2022 26th International Conference on Pattern Recognition (ICPR), 4877-4884, 2022

A machine learning (ML) fairness estimator, which is used to assess an ML model’s fairness, should satisfy several conditions when used in real-life settings. Specifically, it should: i) support a comprehensive fairness evaluation that explores all ethical aspects; ii) be flexible and support different ML model settings; iii) enable comparison between different evaluations and ML models; and iv) provide reasoning and explanations for the fairness assessments produced. Existing methods do not sufficiently satisfy all of the above conditions. In this paper, we present FEPC (Fairness Estimation using Prototypes and Critics for tabular data), a novel method for fairness assessment that provides explanations and reasoning for its assessments by using an adversarial attack and customized fairness measurement. Given an ML model and data records, FEPC performs a comprehensive fairness evaluation and produces a …
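The customized fairness measurement used by FEPC is not detailed in this excerpt. As background on what a fairness estimator computes, a standard group-fairness metric such as the demographic parity difference (the gap in positive-prediction rates between two sensitive groups) can be sketched as follows; the function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between two sensitive groups.

    y_pred    : array of binary model predictions (0/1)
    sensitive : array of binary group membership labels (0/1)
    A value of 0 means both groups receive positive predictions at the
    same rate; larger values indicate a bigger disparity.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group0 = y_pred[sensitive == 0].mean()
    rate_group1 = y_pred[sensitive == 1].mean()
    return abs(rate_group0 - rate_group1)

# Toy example: group 0 gets positives 25% of the time, group 1 gets 75%
preds = [1, 0, 0, 0, 1, 1, 1, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, group))  # 0.5
```

A metric like this captures only one ethical aspect; the paper's point is that a comprehensive evaluation must combine several such measures with flexibility across model settings and with explanations for the resulting scores.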