
Automatic synthesis of arbitrarily connected artificial neural networks

Grant number: 12/06646-7
Support Opportunities: Scholarships in Brazil - Post-Doctorate
Effective date (Start): July 01, 2012
Effective date (End): June 30, 2015
Field of knowledge: Engineering - Electrical Engineering - Industrial Electronics, Electronic Systems and Controls
Principal Investigator: Fernando José von Zuben
Grantee: Wilfredo Jaime Puma Villanueva
Host Institution: Faculdade de Engenharia Elétrica e de Computação (FEEC). Universidade Estadual de Campinas (UNICAMP). Campinas, SP, Brazil

Abstract

This project aims at studying the automatic synthesis of arbitrarily connected neural networks (ACNNs) in the context of supervised learning from data. In machine learning, there is an increasing demand for methodologies with a higher degree of design autonomy, capable of producing effective solutions under a wider set of design objectives and restrictions. Synthesis here means the simultaneous and automatic definition of both the architecture and the synaptic weights of the neural network, seeking maximum performance; arbitrary connectivity means that connections are defined without obeying any preconceived pattern or rule. Hence, recurrent and non-recurrent connections are admissible between any pair of neurons, including self-connections. The main expected benefit is the network's capability to match the demands of the problem, be they linear, nonlinear, or hybrid. As a consequence, the obtained neural networks may converge to purely linear, nonlinear, or hybrid architectures, recurrent or not, in response to the demands of the intended application. The greatest challenge lies in the synthesis process itself: once the restrictions present in traditional models, such as the multilayer perceptron, cascade correlation, and the recurrent networks already proposed in the literature, are relaxed, the complexity of defining a proper architecture and of performing the training process increases significantly.
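To make the notion of arbitrary connectivity concrete, the sketch below (an illustration, not the project's implementation) represents an ACNN by a full weight matrix over all neurons, so that feedforward, recurrent, and self-connections are all admissible, and unrolls the recurrent dynamics for a few synchronous update steps. All names and parameter choices here are hypothetical.

```python
import numpy as np

def acnn_step(W, b, state, x):
    """One synchronous update of an arbitrarily connected network.

    W: (n, n) weight matrix; W[i, j] is the connection from neuron j to
       neuron i. Any entry may be nonzero, so feedforward, recurrent, and
       self-connections (the diagonal) are all admissible.
    b: (n,) biases; state: (n,) previous activations;
    x: (n,) external input (zero for neurons without an external signal).
    """
    return np.tanh(W @ state + b + x)

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.5, size=(n, n))   # arbitrary connectivity, incl. self-loops
b = np.zeros(n)
state = np.zeros(n)
x = np.zeros(n)
x[0] = 1.0                               # drive only the first neuron

for _ in range(3):                       # unroll a few recurrent steps
    state = acnn_step(W, b, state, x)
```

Because nothing constrains W to be, say, lower-triangular, a purely feedforward multilayer network is just the special case in which the corresponding entries happen to be zero.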
The fundamental aspects to be investigated in this research project are: (i) implementation of evolutionary algorithms and constructive algorithms to synthesize ACNNs; (ii) study and formulation of an innovative procedure to analytically compute the gradient vector with respect to the synaptic weights of the ACNNs, to be adopted by global and local second-order learning algorithms; (iii) use of GPUs (Graphics Processing Units) to execute the proposed algorithms on parallel computational devices; and (iv) evaluation of the proposed algorithms on problems of practical relevance in the areas of regression, classification, and data compression, including comparative analysis with state-of-the-art contenders.
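Item (ii) concerns gradients of a loss through the unrolled recurrent dynamics. The project proposes an analytical procedure; as a generic stand-in for illustration only, the sketch below computes the gradient over every synaptic weight of an unrolled tanh network by central finite differences, which is the standard baseline against which any analytical gradient can be checked. All function names and parameters are hypothetical.

```python
import numpy as np

def unroll_loss(W, state0, x, target, steps=3):
    # Unroll the recurrent update and measure squared error at the final step.
    s = state0
    for _ in range(steps):
        s = np.tanh(W @ s + x)
    return 0.5 * np.sum((s - target) ** 2)

def numerical_grad(W, state0, x, target, eps=1e-6):
    # Central finite differences, one perturbation per synaptic weight.
    g = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy(); Wp[i, j] += eps
            Wm = W.copy(); Wm[i, j] -= eps
            g[i, j] = (unroll_loss(Wp, state0, x, target)
                       - unroll_loss(Wm, state0, x, target)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
n = 3
W = rng.normal(scale=0.5, size=(n, n))
state0 = np.zeros(n)
x = rng.normal(size=n)
target = rng.normal(size=n)

g = numerical_grad(W, state0, x, target)
```

An analytical procedure such as backpropagation through time delivers the same vector at a cost that does not grow with the number of weights perturbed, which is what makes second-order learning algorithms over large ACNNs feasible.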

