Fairness in Machine Learning is an area that develops techniques to mitigate discriminatory effects in decisions made by Machine Learning models while preserving, as much as possible, the accuracy of those decisions. In this area, there are three possible points of intervention to produce fairer classifiers, distinguished by the stage at which they intervene: pre-processing, in-processing, and post-processing. Pre-processing approaches have several advantages: they do not require modifying the classifier, they do not need access to sensitive information (e.g., race, sex, religion) during training or testing, and they can be reused in other Machine Learning tasks. Because the area is recent, it still lacks studies that comparatively analyze the many algorithms proposed in the literature. Motivated by this gap and by the advantages above, this research project intends to conduct a comparative analysis of the pre-processing methods proposed in Fairness in Machine Learning, in order to identify, understand, and relate the most adequate and efficient algorithms for each concept of fairness, performance measure, and type of data.
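To illustrate the kind of pre-processing method the project will compare, a minimal sketch of reweighing (Kamiran and Calders) is shown below: each training example receives a weight so that, under the weighted distribution, the sensitive attribute and the label become statistically independent, without touching the classifier itself. The function name and the pure-Python formulation are illustrative choices, not code from the project.

```python
from collections import Counter

def reweigh(sensitive, labels):
    """Illustrative reweighing: assign each example the weight
    w(s, y) = P(s) * P(y) / P(s, y), so the sensitive attribute s
    and label y are independent under the weighted distribution."""
    n = len(labels)
    count_s = Counter(sensitive)            # marginal counts of s
    count_y = Counter(labels)               # marginal counts of y
    count_sy = Counter(zip(sensitive, labels))  # joint counts of (s, y)
    # (count_s/n)*(count_y/n) / (count_sy/n) simplifies to the line below
    return [count_s[s] * count_y[y] / (n * count_sy[(s, y)])
            for s, y in zip(sensitive, labels)]

# Example: group "a" gets the positive label more often than group "b",
# so (a, 1) and (b, 0) examples are down-weighted, the others up-weighted.
weights = reweigh(["a", "a", "a", "b", "b", "b"], [1, 1, 0, 1, 0, 0])
```

The resulting weights can be passed to any learner that accepts per-example weights (e.g., a `sample_weight` argument), which is why such methods are classifier-agnostic.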