Deep learning architectures, in particular convolutional neural networks (CNNs), are responsible for recent research advances in computer vision and machine learning, as these networks have achieved excellent results in different application domains. The resurgence of CNNs occurred in 2012, when the AlexNet architecture reduced the error rate of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) by 10%. This surprising result drew the scientific community's attention to deep learning networks, and in a short time the error rate dropped to 7.3%; nowadays the challenge is virtually solved, with error rates below 3%. In 2014, a Google research group found that several machine learning models are vulnerable to adversarial examples: the addition of imperceptible noise to an image was enough to fool any trained machine learning model. This finding gave rise to a new research field, adversarial pattern recognition, which aims to create learning models that are robust to data whose distribution differs from that used during training (adversarial examples). This undesired behavior of CNNs, observed in the literature, may be caused by problems occurring at various stages of the learning process. In this sense, this research project aims to study, evaluate, and develop new deep learning approaches that are more robust to adversarial examples, thereby improving effectiveness in multimedia data classification tasks in real applications of the e-Science domain.
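The idea of fooling a trained model with imperceptible noise can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), one common way to craft adversarial examples. The toy logistic-regression weights, input, and perturbation budget below are illustrative assumptions, not part of this project; for a real CNN the input gradient would be obtained by backpropagation through the network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny "trained" logistic-regression classifier (hypothetical weights).
w = np.array([2.0, -1.0])
x = np.array([0.3, 0.1])   # clean input, true label y = 1
y = 1.0

# Clean prediction: w @ x = 0.5, so p > 0.5 and the model correctly outputs class 1.
p_clean = sigmoid(w @ x)
pred_clean = int(p_clean > 0.5)

# FGSM: perturb the input along the sign of the loss gradient w.r.t. x.
# For the logistic loss, d(loss)/dx = (p - y) * w.
grad_x = (p_clean - y) * w
epsilon = 0.5              # perturbation budget (assumed; tiny in image settings)
x_adv = x + epsilon * np.sign(grad_x)

# Adversarial prediction: the same model now outputs class 0.
p_adv = sigmoid(w @ x_adv)
pred_adv = int(p_adv > 0.5)
```

The perturbation is bounded in each coordinate by epsilon, which is why, with small budgets on high-dimensional images, the change is imperceptible to humans while still flipping the model's decision.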
News about the scholarship published in the Agência FAPESP newsletter: