Interpretability and fairness in machine learning: Capacity-based functions and interaction indices

Grant number: 21/11086-0
Support type: Scholarships abroad - Research Internship - Post-doctor
Effective date (Start): April 01, 2022
Effective date (End): March 31, 2023
Field of knowledge: Engineering - Electrical Engineering - Telecommunications
Principal researcher: Leonardo Tomazeli Duarte
Grantee: Guilherme Dean Pelegrina
Supervisor abroad: Michel Grabisch
Home Institution: Faculdade de Ciências Aplicadas (FCA), Universidade Estadual de Campinas (UNICAMP), Limeira, SP, Brazil
Research place: Université Paris 1 Panthéon-Sorbonne, France  
Associated to the scholarship: 20/10572-5 - Novel approaches for fairness and transparency in machine learning problems, BP.PD

Abstract

Currently, the research community has been putting effort into the development of mechanisms that improve interpretability and fairness in machine learning problems. Recent works have addressed interpretability by means of a model-agnostic method called SHAP. This method is based on the Shapley value, a classical concept in cooperative game theory that quantifies the marginal contribution of an attribute to the model output. However, the idea behind the Shapley value, as well as other importance indices, can be extended to coalitions of attributes. Therefore, we will investigate whether these methods can be used to understand the effect of single attributes as well as the interactions between them in the trained model. Moreover, we may use these interpretations to understand how the trained model leads to unfair results.

Most of the model-agnostic methods for local explanation (e.g., the SHAP method) approximate the model by an interpretable linear function. In the literature, one may find capacity-based functions, such as the Choquet integral and the multilinear model, that could be used to locally fit the model and provide a clear understanding of the role of the attributes in the model output. Indeed, the capacity coefficients in the Choquet integral are directly associated with the Shapley values; for the multilinear model, the parameters are associated with the Banzhaf values, another concept in cooperative game theory. Therefore, another goal is to investigate whether these functions can be used to improve interpretability in machine learning.

It is worth mentioning that this research internship project lies in the context of Post-Doctoral fellowship 2020/10572-5, which comprises the investigation of novel approaches to deal with interpretability and fairness in machine learning.
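
For reference, the standard definitions of the quantities mentioned above, for a game or capacity $v$ (resp. $\mu$) on the attribute set $N$ with $n = |N|$, are:
\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr) \quad \text{(Shapley value)},
\]
\[
\beta_i(v) = \frac{1}{2^{\,n-1}} \sum_{S \subseteq N \setminus \{i\}} \bigl(v(S \cup \{i\}) - v(S)\bigr) \quad \text{(Banzhaf value)},
\]
\[
C_\mu(x) = \sum_{i=1}^{n} \bigl(x_{\sigma(i)} - x_{\sigma(i-1)}\bigr)\,\mu(\{\sigma(i), \dots, \sigma(n)\}) \quad \text{(Choquet integral)},
\]
\[
M_\mu(x) = \sum_{S \subseteq N} \mu(S) \prod_{i \in S} x_i \prod_{j \in N \setminus S} (1 - x_j) \quad \text{(multilinear model)},
\]
where $\sigma$ is a permutation ordering the inputs so that $x_{\sigma(1)} \le \dots \le x_{\sigma(n)}$, with $x_{\sigma(0)} := 0$. The Shapley value of $\mu$ summarizes the average importance of attribute $i$ in $C_\mu$, while the Banzhaf value plays the analogous role for $M_\mu$.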
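
A minimal sketch of the model-agnostic SHAP approach described above, assuming the open-source shap package and a synthetic dataset (the model, data, and parameter choices here are illustrative only):

    # Illustrative sketch: model-agnostic SHAP explanations on synthetic data.
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic data and an arbitrary black-box classifier.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # KernelExplainer estimates Shapley values by fitting a weighted linear
    # surrogate around each instance to be explained.
    background = shap.sample(X, 50)
    explainer = shap.KernelExplainer(model.predict_proba, background)
    shap_values = explainer.shap_values(X[:5])  # per-attribute Shapley-value estimates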
