The goal of fairness-aware data mining (FADM) is to analyze data while taking into account potential issues of fairness, discrimination, neutrality, and/or independence. Pedreschi, Ruggieri, and Turini first posed this problem at KDD 2008, and a literature on the topic has since emerged.

### General discussion about fairness-aware data mining

The two major tasks of FADM are unfairness detection and unfairness prevention. An unfairness detection task aims to find unfair treatments in a database. The aim of an unfairness prevention task is to learn a statistical model from potentially unfair data sets so that a sensitive feature does not influence the model's outcomes. Here, a sensitive feature represents information that should not influence outcomes, such as socially sensitive information or information that a user wants to ignore.
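As a minimal sketch of unfairness detection, the toy example below checks the widely used statistical-parity criterion: the rate of favorable decisions should not depend on the sensitive feature. All data here are hypothetical.

```python
# Toy database of (decision y, sensitive feature s) pairs;
# y = 1 is the favorable outcome, s is binary.
records = [
    (1, 0), (1, 0), (0, 0), (1, 0),
    (0, 1), (1, 1), (0, 1), (0, 1),
]

def positive_rate(records, group):
    """Fraction of favorable decisions within one sensitive group."""
    outcomes = [y for y, s in records if s == group]
    return sum(outcomes) / len(outcomes)

# A large gap between group rates flags potentially unfair treatment:
# here 0.75 for s = 0 versus 0.25 for s = 1.
parity_gap = abs(positive_rate(records, 0) - positive_rate(records, 1))
```

More elaborate detection methods look at such disparities conditioned on legitimately explanatory features, but the group-rate comparison above is the basic building block.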


## Fairness-aware Classification

A vertical plane depicts a model sub-space: the set of distributions representable by a parametric model. In a standard classification task, the goal is to find the best parameter such that the resulting distribution, \(\hat{\Pr}[Y,\mathbf{X},S;\boldsymbol{\Theta}^{\ast}]\), best approximates the true distribution, \(\Pr[Y,\mathbf{X},S]\). The best estimated distribution is chosen so as to minimize the divergence between \(\Pr[Y,\mathbf{X},S]\) and \(\hat{\Pr}[Y,\mathbf{X},S;\boldsymbol{\Theta}^{\ast}]\) ((a) in the figure).
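If the divergence is instantiated as the Kullback–Leibler divergence (an assumption made here for illustration; the section does not commit to a particular divergence), minimizing it over the model sub-space reduces to familiar maximum-likelihood estimation:

```latex
\boldsymbol{\Theta}^{\ast}
  = \arg\min_{\boldsymbol{\Theta}}\,
    \mathrm{KL}\!\left(\Pr[Y,\mathbf{X},S] \,\middle\|\, \hat{\Pr}[Y,\mathbf{X},S;\boldsymbol{\Theta}]\right)
  = \arg\max_{\boldsymbol{\Theta}}\,
    \mathbb{E}_{\Pr[Y,\mathbf{X},S]}\!\left[\log \hat{\Pr}[Y,\mathbf{X},S;\boldsymbol{\Theta}]\right],
```

since the entropy of the true distribution does not depend on \(\boldsymbol{\Theta}\); in practice the expectation is replaced by the average log-likelihood over the training sample.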

We turn to a case of fairness-aware classification. The goal of a fairness-aware classification task is to find a fair estimated model, \(\hat{\Pr}^\dagger[Y,\mathbf{X},S;\boldsymbol{\Theta}^{\ast}]\), that best approximates a fair true distribution, \(\Pr^\dagger[Y,\mathbf{X},S]\).

A horizontal plane depicts a fair sub-space: the set of distributions that satisfy a pre-specified fairness constraint. A fair true distribution, \(\Pr^\dagger[Y,\mathbf{X},S]\), must lie in this fair sub-space. A parametric model of fair estimated distributions, \({\hat{\Pr}}^\dagger[Y,\mathbf{X},S;\boldsymbol{\Theta}^{\ast}]\), must lie in the intersection of the fair and model sub-spaces, depicted by a thick line in the figure. Our goal is to find the best parameter so as to minimize the divergence between the fair true distribution and a fair estimated distribution ((b) in the figure). Unfortunately, we cannot sample from the fair true distribution, owing to the potential unfairness of actual decisions in the real world.

We therefore instead minimize the divergence (c) between the empirically observed distribution and a fair estimated distribution.

Program Codes: Fairness-Aware Classification (Soft & Data)

### Analysis of the influence of biases on fairness

We analyzed the influence of a model bias and a decision rule on fairness.

Publication: ICDMW13

### Logistic Regression with Prejudice Remover Regularizer

We formulated fairness-aware classification as an optimization problem. A penalty term that enhances the statistical independence between a target variable and a sensitive feature is adopted as a regularizer.
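The sketch below illustrates the idea of such a regularizer, not the published method: the original prejudice remover penalizes mutual information between predictions and the sensitive feature, whereas, for brevity, this hypothetical version uses a simpler independence proxy, the squared gap in mean predicted probability between the two sensitive groups. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
s = rng.integers(0, 2, size=n).astype(float)          # binary sensitive feature
X = np.column_stack([
    rng.normal(size=n),                               # feature independent of s
    s + rng.normal(scale=0.3, size=n),                # feature correlated with s
    np.ones(n),                                       # intercept
])
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.3, size=n) > 1.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit(eta):
    """Minimize negative log-likelihood + eta * (group gap in mean prediction)^2."""
    def objective(w):
        p = sigmoid(X @ w)
        nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        gap = p[s == 1].mean() - p[s == 0].mean()
        return nll + eta * gap ** 2
    w = np.zeros(X.shape[1])
    for _ in range(400):                              # plain gradient descent with
        grad = np.array([                             # central-difference gradients
            (objective(w + 1e-5 * e) - objective(w - 1e-5 * e)) / 2e-5
            for e in np.eye(X.shape[1])])
        w -= 0.3 * grad
    p = sigmoid(X @ w)
    return abs(p[s == 1].mean() - p[s == 0].mean())

gap_plain = fit(eta=0.0)   # the model freely exploits the s-correlated feature
gap_fair = fit(eta=5.0)    # the regularizer suppresses the group gap
```

Raising the regularization weight `eta` trades predictive accuracy for a smaller dependence of the output on the sensitive feature.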

Publication: ECMLPKDD12, ICDMW11

## Independence-Enhanced Recommender System

Recommendation independence is defined as unconditional statistical independence between a recommendation result and a sensitive feature. An *independence-enhanced recommender system* (IERS) is a recommender system that maintains recommendation independence. Examples of IERS applications include adherence to laws and regulations by a recommendation service, fair treatment of content providers, and the exclusion of unwanted information.

Program Codes: Independence-Enhanced Recommender System (Soft & Data)

### Independence-Enhanced Probabilistic Matrix Factorization Model

We introduce a penalty term into a probabilistic matrix factorization model to enhance independence. The penalty term is designed to quantify the degree of independence between a binary sensitive feature and a predicted rating score.
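As a concrete illustration, the sketch below shows one simple way such a penalty can be written (a hypothetical mean-matching choice, not necessarily the exact term used in the publications): the squared gap between the mean predicted ratings of the two sensitive groups, added to the squared reconstruction error of the factorization.

```python
import numpy as np

def independence_penalty(pred, s):
    """Squared gap between mean predicted ratings of the two groups of a
    binary sensitive feature; zero exactly when the group means coincide."""
    return (pred[s == 1].mean() - pred[s == 0].mean()) ** 2

def penalized_loss(pred, rating, s, eta):
    """Reconstruction error of the factorization plus the weighted penalty."""
    return np.mean((pred - rating) ** 2) + eta * independence_penalty(pred, s)

# Toy predicted ratings: group 1 is rated a full point higher on average,
# so the penalty is (3.5 - 2.5)^2 = 1.0; equal group means give 0.0.
pred = np.array([4.0, 3.0, 2.0, 3.0])
s = np.array([1, 1, 0, 0])
biased = independence_penalty(pred, s)
balanced = independence_penalty(np.array([3.0, 3.0, 3.0, 3.0]), s)
```

During training, the gradient of this penalty with respect to the latent factors pushes the two group means together, which is what enhances independence.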

Publication: RecSysP14, RecSysW13, RecSysW12

## Links to Related Sites

Workshops

- Privacy and Discrimination in Data Mining ICDM2016
- Fairness, Accountability, and Transparency in Machine Learning NIPS2014, ICML2015, 2016
- Discrimination and Privacy-Aware Data Mining ICDM2012

Tutorials

Program codes

- fairml @ GitHub Sample implementations of FADM algorithms
- Conditional non-discrimination by Žliobaitė
- DCUBE: Discrimination Discovery in Databases by Pedreschi, Ruggieri, and Turini