By David Griffin
Algorithms dominate many aspects of our lives. They play largely unseen roles in everything from determining which advertisements target us online to deciding who will be hired for a job. Since these algorithms are based on mathematics, it is understandable to assume they are fair and unbiased. However, this is rarely the case. Because they are written, directly or indirectly, by humans, they are often filled with unidentified biases which can negatively impact women and minorities.
A recent paper by Verma and Rubin (2018) sought to explore definitions of fairness for classification in machine learning (ML). In this context, classification refers to the labelling or categorising of data based on predictions made from a large exemplary dataset. Common definitions of fairness found in artificial intelligence (AI) and ML literature were then illustrated by the authors using a single example: the German Credit Dataset (GCD) (Lichman, 2013). This dataset, used in many papers to explore fairness through example, contains information about 1,000 German loan applicants from the 1990s, together with their individual characteristics as applicants. The 20 characteristics listed include details such as marital status, age, gender, number of dependents, credit history and occupation. In their paper, Verma and Rubin (2018) illustrated the various definitions of fairness used in AI and ML literature, with an emphasis on gender bias and discrimination. An overview of this work is provided in the following section.
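To make the definitions easier to picture, each one below is followed by a short Python sketch. The sketches operate on small synthetic arrays standing in for the GCD rather than on the real dataset; every variable name and value (y_true, y_pred, score, gender) is invented for illustration, and the encoding 1 = good credit / 0 = bad credit is an assumption of these sketches, not something taken from Verma and Rubin's paper.

```python
import numpy as np

# Synthetic stand-in for the German Credit Dataset used in the sketches below.
# 1 = good credit, 0 = bad credit; "m"/"f" mark the protected attribute (gender).
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 1])                   # actual credit standing
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])                   # classifier's prediction
score  = np.array([0.9, 0.3, 0.4, 0.6, 0.8, 0.2, 0.7, 0.5])   # predicted probability
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])   # protected attribute
```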
‘Group fairness’, ‘Statistical parity’, ‘Benchmarking’ or ‘Equal acceptance rate’ – This definition of fairness is met if subjects with protected and unprotected characteristics are equally likely to receive a positive classification prediction. Protected characteristics are those which should not be used to discriminate between people, such as gender or disability. In the example of the GCD, this would mean that male and female applicants would have an equal likelihood of receiving a good credit score prediction.
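As a rough sketch, the acceptance rates in question can be compared directly on invented toy data (the helper name and values are mine, not the authors'):

```python
import numpy as np

def acceptance_rate(y_pred, group, value):
    """Share of subjects in one group that receive a positive prediction."""
    return y_pred[group == value].mean()

# Invented toy predictions: 1 = predicted a good credit score, 0 = bad.
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0])
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Group fairness holds (roughly) when these two rates are equal.
print(acceptance_rate(y_pred, gender, "m"))  # P(predicted good | male)
print(acceptance_rate(y_pred, gender, "f"))  # P(predicted good | female)
```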
‘Conditional statistical parity’ – This definition of fairness is met if subjects with protected and unprotected characteristics are equally likely to receive a positive classification prediction once legitimate factors are controlled for. When using the GCD example, legitimate factors would include credit history and employment. Thus, male and female applicants, given parity in credit history and employment credentials, would be equally likely to be predicted a good credit score.
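A minimal sketch of this check, assuming invented values and a single made-up legitimate factor (credit history), compares acceptance rates within each stratum of that factor:

```python
import numpy as np

# Invented toy data: predictions plus one "legitimate" factor (credit history).
y_pred  = np.array([1, 0, 1, 1, 0, 1, 1, 0])
gender  = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])
history = np.array(["good", "bad", "good", "good", "good", "good", "bad", "bad"])

# Compare acceptance rates for male and female applicants within each stratum.
for h in ("good", "bad"):
    for g in ("m", "f"):
        mask = (history == h) & (gender == g)
        rate = y_pred[mask].mean() if mask.any() else float("nan")
        print(h, g, rate)
```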
‘Predictive parity’ or ‘Outcome test’ – This definition of fairness is satisfied if subjects with protected and unprotected characteristics who are predicted a positive classification have equal probability of truly belonging to the positive class. Using the GCD example, this would mean that male and female applicants predicted a good credit score would be equally likely to actually hold one. That is to say, the two groups have equal positive predictive value (PPV).
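On toy data, the PPV of each group can be sketched as follows (again, the data and helper are invented for illustration):

```python
import numpy as np

def ppv(y_true, y_pred, group, value):
    """P(actually good credit | predicted good credit) within one group."""
    mask = (group == value) & (y_pred == 1)
    return y_true[mask].mean() if mask.any() else float("nan")

# Invented toy labels and predictions (1 = good credit, 0 = bad credit).
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 1, 1, 1, 0])
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Predictive parity holds when the male and female PPVs are equal.
print(ppv(y_true, y_pred, gender, "m"), ppv(y_true, y_pred, gender, "f"))
```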
‘False positive error rate balance’ or ‘Predictive equality’ – This definition is true of a classifier if subjects with protected and unprotected characteristics who actually belong to the negative class are equally likely to be incorrectly predicted a positive classification; in other words, the two groups have equal false positive rates. In the GCD example, this would mean that both male and female applicants with bad credit scores would be equally likely to be predicted a good credit score by the classifier.
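A sketch of the corresponding check, using invented values, compares false positive rates across the two groups:

```python
import numpy as np

def false_positive_rate(y_true, y_pred, group, value):
    """P(predicted good | actually bad) within one group."""
    mask = (group == value) & (y_true == 0)
    return y_pred[mask].mean() if mask.any() else float("nan")

y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # invented toy labels
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0])   # invented toy predictions
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Predictive equality holds when the two false positive rates match.
print(false_positive_rate(y_true, y_pred, gender, "m"))
print(false_positive_rate(y_true, y_pred, gender, "f"))
```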
‘False negative error rate balance’ or ‘Equal opportunity’ – This definition is the reverse of the last; it is true of a classifier if subjects with protected and unprotected characteristics who actually belong to the positive class are equally likely to be incorrectly predicted a negative classification; in other words, the two groups have equal false negative rates. Intuitively, in the GCD example, this would mean that both male and female applicants with good credit scores would be equally likely to be predicted a bad one by the classifier.
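The mirror-image sketch, again on invented values, compares false negative rates:

```python
import numpy as np

def false_negative_rate(y_true, y_pred, group, value):
    """P(predicted bad | actually good) within one group."""
    mask = (group == value) & (y_true == 1)
    return (y_pred[mask] == 0).mean() if mask.any() else float("nan")

y_true = np.array([1, 1, 0, 1, 1, 1, 0, 1])   # invented toy labels
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # invented toy predictions
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Equal opportunity holds when the two false negative rates match.
print(false_negative_rate(y_true, y_pred, gender, "m"))
print(false_negative_rate(y_true, y_pred, gender, "f"))
```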
‘Equalised odds’, ‘Conditional procedure accuracy equality’ or ‘Disparate mistreatment’ – This definition holds if both of the previous two definitions hold; that is, the protected and unprotected groups have equal false positive rates and equal false negative rates.
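Since this definition simply combines the two previous checks, a sketch on invented data only needs to report both error rates per group:

```python
import numpy as np

def error_rates(y_true, y_pred, group, value):
    """Return (false positive rate, false negative rate) for one group."""
    g = group == value
    fpr = y_pred[g & (y_true == 0)].mean()
    fnr = (y_pred[g & (y_true == 1)] == 0).mean()
    return fpr, fnr

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # invented toy labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])   # invented toy predictions
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Equalised odds holds when both error rates agree across the two groups.
print(error_rates(y_true, y_pred, gender, "m"))
print(error_rates(y_true, y_pred, gender, "f"))
```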
‘Conditional use accuracy equality’ – This definition of fairness is met by a classifier if, across subjects with protected and unprotected characteristics, those predicted a positive classification are equally likely to truly deserve one, and those predicted a negative classification are equally likely to truly deserve one; in other words, the groups have equal positive and negative predictive values. Using the GCD example, this would suggest that both male and female applicants would be equally likely to receive the appropriate credit score classification.
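A sketch of both quantities on invented data, with a hypothetical helper name:

```python
import numpy as np

def ppv_npv(y_true, y_pred, group, value):
    """Return (PPV, NPV) for one group: how often positive and negative calls are right."""
    g = group == value
    ppv = y_true[g & (y_pred == 1)].mean()          # P(actually good | predicted good)
    npv = (y_true[g & (y_pred == 0)] == 0).mean()   # P(actually bad  | predicted bad)
    return ppv, npv

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # invented toy labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])   # invented toy predictions
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# The definition asks for both values to match across male and female applicants.
print(ppv_npv(y_true, y_pred, gender, "m"))
print(ppv_npv(y_true, y_pred, gender, "f"))
```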
‘Treatment equality’ – This definition is met by a classifier if subjects with protected and unprotected characteristics have an equal ratio of false positive classifications to false negative classifications. In the GCD example, this would mean that the ratio of those incorrectly predicted a good credit score to those incorrectly predicted a bad credit score would be the same for both male and female applicants.
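On invented toy data, the ratio can be sketched like so:

```python
import numpy as np

def fp_fn_ratio(y_true, y_pred, group, value):
    """Ratio of false positives to false negatives within one group."""
    g = group == value
    fp = int(np.sum(g & (y_true == 0) & (y_pred == 1)))
    fn = int(np.sum(g & (y_true == 1) & (y_pred == 0)))
    return fp / fn if fn else float("inf")

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # invented toy labels
y_pred = np.array([1, 1, 0, 1, 1, 1, 0, 0])   # invented toy predictions
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Treatment equality holds when these two ratios are equal.
print(fp_fn_ratio(y_true, y_pred, gender, "m"))
print(fp_fn_ratio(y_true, y_pred, gender, "f"))
```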
‘Test fairness’, ‘Calibration’, ‘Matching conditional frequencies’ or ‘Well-calibration’ – This definition of fairness is met if, for any given predicted probability score, subjects with protected and unprotected characteristics are equally likely to truly belong to the positive classification. Within the GCD example, this would mean that male and female applicants who receive the same predicted score would be equally likely to actually hold a good credit score.
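A crude sketch on invented scores bins the predicted probabilities and compares the share of genuinely good credit scores in each bin across groups (a real check would use finer bins or a calibration curve):

```python
import numpy as np

# Invented toy data: a probability score produced by the classifier.
score  = np.array([0.9, 0.8, 0.3, 0.2, 0.9, 0.7, 0.3, 0.1])
y_true = np.array([1,   1,   0,   0,   1,   0,   1,   0  ])
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Calibration holds when, bin by bin, the share of truly good credit scores
# is the same for male and female applicants.
for name, in_bin in (("low score", score < 0.5), ("high score", score >= 0.5)):
    for g in ("m", "f"):
        mask = in_bin & (gender == g)
        rate = y_true[mask].mean() if mask.any() else float("nan")
        print(name, g, rate)
```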
‘Balance for positive class’ – This definition is met if subjects with protected and unprotected characteristics who truly belong to the positive class have, on average, the same predicted probability score. In the GCD example, this would mean that male and female loan applicants who actually hold a good credit score would receive, on average, the same predicted score from the classifier.
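Sketched on invented scores, this amounts to comparing mean scores among the genuinely good credit risks in each group:

```python
import numpy as np

# Invented toy scores and labels (1 = actually holds a good credit score).
score  = np.array([0.9, 0.6, 0.3, 0.8, 0.7, 0.8, 0.2, 0.9])
y_true = np.array([1,   1,   0,   1,   1,   1,   0,   1  ])
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Among applicants who really have good credit, the average predicted score
# should be the same for both groups.
for g in ("m", "f"):
    mask = (gender == g) & (y_true == 1)
    print(g, score[mask].mean())
```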
‘Balance for negative class’ – This definition of fairness is essentially the opposite of the last. It is met if subjects with protected and unprotected characteristics who truly belong to the negative class have, on average, the same predicted probability score. Within the context of the GCD example, this would mean that male and female loan applicants who actually hold a bad credit score would receive, on average, the same predicted score from the classifier.
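The mirror-image sketch compares mean scores among the genuinely bad credit risks (values invented):

```python
import numpy as np

# Invented toy scores and labels (0 = actually holds a bad credit score).
score  = np.array([0.4, 0.6, 0.3, 0.8, 0.7, 0.2, 0.5, 0.1])
y_true = np.array([0,   1,   0,   1,   1,   0,   0,   0  ])
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

# Among applicants who really have bad credit, the average predicted score
# should be the same for both groups.
for g in ("m", "f"):
    mask = (gender == g) & (y_true == 0)
    print(g, score[mask].mean())
```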
‘Causal discrimination’ – A classifier meets this definition of fairness if any two subjects who are identical in every attribute except the protected one receive the same classification. In the GCD example, this would mean that a male and a female loan applicant with otherwise identical attributes would both receive either a good or a bad predicted credit score.
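One way to sketch the test, with an entirely made-up classifier and attribute names, is to flip the protected attribute and compare predictions:

```python
# Hypothetical, deliberately unfair toy classifier: it peeks at gender.
def classifier(applicant):
    return 1 if applicant["history"] == "good" and applicant["gender"] == "m" else 0

applicant = {"gender": "f", "history": "good", "dependants": 2}
flipped = dict(applicant, gender="m")   # identical except the protected attribute

# Causal discrimination is violated whenever the two predictions differ.
print(classifier(applicant), classifier(flipped))   # prints: 0 1  -> violation
```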
‘Fairness through unawareness’ – This definition of fairness is similar to the last. It is met if a classifier does not use any protected characteristic in making its classification. In the context of the GCD example, this would mean that whether an applicant was male or female would not be considered by the classifier when predicting their credit score.
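A minimal sketch, assuming made-up attribute names, is simply to drop the protected attribute before the data ever reaches the model:

```python
# Invented applicant record; only the unprotected attributes are kept.
applicant = {"gender": "f", "history": "good", "dependants": 2, "age": 34}

protected = {"gender"}
features = {k: v for k, v in applicant.items() if k not in protected}

# Whatever model is trained or queried downstream never sees the protected attribute.
print(features)   # {'history': 'good', 'dependants': 2, 'age': 34}
```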
Verma and Rubin (2018) provide a concise overview of definitions of fairness in classification through ML. It is hoped that this summary will give readers an introduction to the many and varied definitions of fairness. It is clear from their work that measuring fairness is a complex and challenging task. The authors stress that the applicable definitions will vary with the intended use of each system. They also highlight that while verifiable outcomes are available for training data, it is difficult to ascertain whether real data will follow the same distribution. Consequently, inequality may persist in a system despite training intended to avoid it.
[Source Paper] Verma, S., Rubin, J. 2018. Fairness Definitions Explained. 2018 ACM/IEEE International Workshop on Software Fairness (FairWare), pp. 1-7.
DOI: 10.1145/3194770.3194776
[1] Lichman, M. 2013. UCI Machine Learning Repository.