Security and privacy in machine learning models applied to medical images against adversarial attacks

Grant number: 21/08982-3
Support Opportunities: Scholarships in Brazil - Doctorate
Effective date (Start): March 01, 2022
Effective date (End): February 28, 2026
Field of knowledge: Physical Sciences and Mathematics - Computer Science - Computer Systems
Principal Investigator: Agma Juci Machado Traina
Grantee: Erikson Júlio de Aguiar
Host Institution: Instituto de Ciências Matemáticas e de Computação (ICMC). Universidade de São Paulo (USP). São Carlos, SP, Brazil

Abstract

With the advent of big data, data are being produced on a large scale and used by Machine Learning (ML) models to generate new knowledge. Several areas have benefited from big data and ML, among them healthcare, which can employ complex data such as images to assist medical experts in decision making. While these technologies are valuable for healthcare, they raise issues regarding patient privacy and security. Information leaks occur frequently in healthcare systems; for example, in 2020, the data of 200,000 patients from public health systems in Brazil were exposed. The ML models employed in healthcare are susceptible to attacks that poison the input data or the model itself and cause failures at test time. In addition, they can present both known and unknown backdoors. Adversarial machine learning is the area of study that investigates attack strategies against ML models, which aim to reduce a model's reliability and cause it to misclassify data, as well as the corresponding defenses.

Therefore, this project aims to devise a framework consisting of defense, vulnerability-exploitation, and attack models to understand and combat security and privacy violations in pattern recognition models for medical images. Medical images are used as input to ML models to recognize patterns and support medical decision making. However, these images, like the models that classify them, can suffer attacks that invalidate their robustness or compromise patient privacy. In this project, we expect to: (I) develop defensive algorithms against adversarial examples; (II) devise methods to preserve patient privacy; (III) exploit new vulnerabilities and backdoors that ML models may present; and (IV) propose attack strategies and their respective defenses, in order to communicate to other researchers the possible paths an attacker may follow. (AU)
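To make the notion of an adversarial example concrete, the sketch below crafts one with the Fast Gradient Sign Method (FGSM), a standard baseline attack in this literature. It is not the project's own method; the toy model, input shape, label, and epsilon value are illustrative assumptions.

# Minimal FGSM sketch (PyTorch). Perturbs an input so a classifier is
# more likely to misclassify it, with the perturbation bounded by epsilon.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial version of x against the given model."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in a valid range

# Hypothetical usage with a toy classifier on a random 1-channel "image":
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # stand-in for a medical image
y = torch.tensor([3])          # stand-in for its true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # per-pixel change is at most epsilon

Defenses studied in this setting, such as adversarial training, typically generate perturbed inputs like these during training so the model learns to classify them correctly.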
