Machine Learning Under The Hood: Efficient Accelerators for Deep Networks and its Applicability in Scientific Computation

Grant number: 23/03328-9
Support Opportunities:Scholarships in Brazil - Doctorate
Effective date (Start): April 01, 2023
Effective date (End): March 31, 2025
Field of knowledge:Physical Sciences and Mathematics - Computer Science - Computer Systems
Principal Investigator:Guido Costa Souza de Araújo
Grantee:Lucas Fernando Alvarenga e Silva
Host Institution: Instituto de Computação (IC). Universidade Estadual de Campinas (UNICAMP). Campinas , SP, Brazil
Associated research grant:13/08293-7 - CCES - Center for Computational Engineering and Sciences, AP.CEPID

Abstract

Deep learning (DL) methods, especially Convolutional Neural Networks (CNNs), have brought revolutionary advances to many research areas due to their capacity to learn from raw data. However, these astonishing results were obtained at the cost of large, expensive models that run on computational systems out of reach for most users. Huge models such as Stable Diffusion and GPT-3 are typically tied to the big players in Artificial Intelligence (AI), which operate distributed systems with large numbers of Graphics Processing Units (GPUs), costly and power-hungry general-purpose accelerators originally designed for graphics workloads. At the other end of the spectrum lies the problem of AI democratization: deploying DL methods in a wide range of applications that must run on resource-constrained edge devices. In this case, it is mandatory to offload the execution of DL models to third-party modules, such as domain-specific accelerators that can run complex models at low power consumption and in feasible time. Motivated by both limitations, AI service providers have pushed the development of domain-specific hardware accelerators that further improve the data throughput of DL models, such as the Google Tensor Processing Units and Amazon Inferentia devices deployed in their cloud services for extensive workloads, and Neural Processing Units (NPUs) for resource-constrained devices. This research proposal aims to solve problems that arise from the use of DL-specific hardware accelerators: developing strategies for optimal resource usage on resource-constrained devices, such as data and parameter slicing; integration with well-known AI frameworks; and applicability to real-world tasks, especially the protein-folding problem. We plan to investigate each of these activities using the NeuroMorphic Processor (NMP) engine developed by LGE, an array of processors with specialized modules for matrix operations.
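To illustrate the kind of data and parameter slicing the abstract refers to, the following is a minimal sketch (not code from the project) of a tiled matrix multiplication: a large weight/activation product is decomposed into fixed-size tiles and accumulated tile by tile, mimicking how a matrix-engine accelerator with limited on-chip memory processes slices of the operands rather than the whole tensors at once. The function name, tile size, and use of NumPy are illustrative assumptions.

```python
import numpy as np

def tiled_matmul(a, b, tile=4):
    """Compute a @ b by accumulating fixed-size tiles.

    Each (tile x tile) sub-product fits in a small scratchpad, so a
    resource-constrained accelerator can stream slices of the parameters
    and inputs instead of holding both full matrices on chip.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):            # slice rows of the activations
        for j in range(0, n, tile):        # slice columns of the parameters
            for p in range(0, k, tile):    # accumulate along the shared dim
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return out
```

The result is numerically identical to a single full-size multiply; only the schedule changes, which is what makes this slicing strategy attractive when on-chip memory, not arithmetic throughput, is the binding constraint.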
