Antisparsity and equity: from blind source separation to fair machine learning

Grant number: 22/04237-4
Support Opportunities: Scholarships abroad - Research Internship - Doctorate
Effective date (Start): August 22, 2022
Effective date (End): August 21, 2023
Field of knowledge: Engineering - Electrical Engineering
Principal Investigator: João Marcos Travassos Romano
Grantee: Renan Del Buono Brotto
Supervisor: Jean-Michel Loubes
Host Institution: Faculdade de Engenharia Elétrica e de Computação (FEEC), Universidade Estadual de Campinas (UNICAMP), Campinas, SP, Brazil
Research place: Artificial and Natural Intelligence Toulouse Institute (ANITI), France  
Associated to the scholarship: 19/20899-4 - Antisparsity and Equidity in signal processing: from blind source separation to fairness machine learning, BP.DR

Abstract

Machine learning models have received considerable attention across a wide range of problems due to their strong performance and ability to learn from data. However, applying such models to socio-economic problems (credit assessment, recidivism prediction, student selection, among others) has raised ethical concerns. Biased data usually leads a model to biased decisions, which in turn produce more biased data, creating a vicious circle. It is therefore imperative to develop mathematical tools for dealing with biased machine learning models. In recent years, this problem has attracted great research interest, giving rise to the so-called field of Fair Machine Learning. One can tackle the problem by treating the available data before it is fed to the model (pre-processing techniques); by post-processing the decisions of an already deployed model in order to reduce negative social impacts (post-processing techniques); or by building models that learn the fairness constraints during their own training (in-processing techniques).

Our main objective in this research proposal is to study in-processing techniques and to assess their quality theoretically. One would expect a fair model to treat similar individuals (individuals who share the same attributes and differ only in the sensitive one) in similar ways. Given a measure of distance between individuals in the dataset, treating distant individuals in similar ways will also lead to fair treatment of the closer ones. Robust learning is a well-suited framework for meeting this requirement and is the main technique of our study. However, since the concept of fairness is very open, we are also interested in studying other approaches to Fair Machine Learning. (AU)
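The similarity requirement described above is often formalized as a Lipschitz-style condition: predictions for two individuals should not differ by more than a constant times the distance between them. The following is a minimal, hypothetical sketch of how such a condition could be checked empirically; the distance function, the Lipschitz constant `L`, and the toy model are all illustrative assumptions, not part of the proposal itself.

```python
import math

def distance(x, y):
    # Euclidean distance over the non-sensitive attributes (an assumption;
    # the proposal leaves the choice of metric open).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def lipschitz_violations(predict, pairs, L=1.0):
    """Count pairs (x, x') whose prediction gap exceeds L * d(x, x').

    Each pair is meant to represent two individuals who share the same
    attributes apart from the sensitive one (already removed here).
    """
    violations = 0
    for x, x_prime in pairs:
        gap = abs(predict(x) - predict(x_prime))
        if gap > L * distance(x, x_prime):
            violations += 1
    return violations

# Toy linear model and two pairs of similar individuals (illustrative only).
model = lambda x: 0.5 * x[0] + 0.5 * x[1]
pairs = [((0.2, 0.4), (0.2, 0.4)), ((1.0, 0.0), (0.0, 1.0))]
print(lipschitz_violations(model, pairs))  # 0: both pairs get equal predictions
```

An in-processing method would go further and penalize such violations during training rather than merely counting them afterwards; this sketch only illustrates the fairness condition being targeted.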

