(Reference obtained automatically from Web of Science, based on the FAPESP funding information and the corresponding grant number included in the publication by the authors.)

Dynamic texture analysis using networks generated by deterministic partially self-avoiding walks

Author(s):
Ribas, Lucas C. [1, 2] ; Bruno, Odemir M. [1, 2]
Total Authors: 2
Author affiliation:
[1] Univ Sao Paulo, Sao Carlos Inst Phys, Sci Comp Grp, POB 369, BR-13560970 Sao Carlos, SP - Brazil
[2] Univ Sao Paulo, Inst Math & Comp Sci, Ave Trabalhador Sao Carlense 400, BR-13566590 Sao Carlos, SP - Brazil
Total Affiliations: 2
Document type: Journal article
Source: PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS; v. 541, MAR 1 2020.
Web of Science Citations: 0
Abstract

Dynamic textures are sequences of images (video) in which the concept of texture patterns is extended to the spatiotemporal domain. This research field has attracted attention due to the range of applications in different areas of science and the emergence of a large number of multimedia datasets. Unlike static textures, methods for dynamic texture analysis also need to deal with the time domain, which makes representation more challenging. Thus, it is important to obtain features that properly describe the appearance and motion properties of the dynamic texture. In this paper, we propose a new method for dynamic texture analysis based on Deterministic Partially Self-avoiding Walks (DPSWs) and network science theory. Here, each pixel of the video is considered a vertex of the network, and the edges are given by the movements of the deterministic walk between pixels. The feature vector is obtained by calculating network measures from the networks generated by the DPSWs. The modeled networks incorporate important characteristics of the DPSWs' transitivity and their spatial arrangement in the video. Two different strategies are tested to apply the DPSWs to the video and capture appearance and motion characteristics: one applies the DPSWs in three orthogonal planes, and the other is based on spatial and temporal descriptors. We validate the proposed method on the Dyntex++ and UCLA databases (and their variants), two well-known dynamic texture databases. The results demonstrate the effectiveness of the proposed approach using a small feature vector for both strategies. Moreover, the proposed method improves performance compared with the previous DPSW-based method and the network-based method.
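The abstract describes the core construction: pixels are network vertices and each step of a deterministic walk adds an edge between the pixels it connects. The following is a minimal single-frame sketch of that idea, assuming a nearest-intensity rule over 8-connected neighbors; the function name, the parameters `mu` and `max_steps`, and the fixed step cap are illustrative assumptions, not the authors' exact formulation (which operates on video volumes and detects walk attractors).

```python
from collections import defaultdict, deque

def dpsw_network(frame, mu=2, max_steps=50):
    """Build a pixel-transition network from deterministic partially
    self-avoiding walks started at every pixel (illustrative sketch).

    frame: 2D list of gray-level intensities.
    mu: memory size -- the last mu visited pixels cannot be revisited.
    max_steps: cap per walk (a full implementation would instead stop
               when the walk falls into an attractor cycle).
    Returns {(row, col): set of successor (row, col) vertices}.
    """
    h, w = len(frame), len(frame[0])
    edges = defaultdict(set)
    for r0 in range(h):
        for c0 in range(w):
            cur = (r0, c0)
            memory = deque([cur], maxlen=mu)
            for _ in range(max_steps):
                r, c = cur
                # 8-connected neighbors outside the walker's memory window
                cands = [(r + dr, c + dc)
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0)
                         and 0 <= r + dr < h and 0 <= c + dc < w
                         and (r + dr, c + dc) not in memory]
                if not cands:
                    break  # walker is trapped; this walk ends
                # Deterministic rule: step to the most similar intensity.
                nxt = min(cands,
                          key=lambda p: abs(frame[p[0]][p[1]] - frame[r][c]))
                edges[cur].add(nxt)  # walk movement becomes a network edge
                memory.append(nxt)
                cur = nxt
    return dict(edges)
```

From the resulting adjacency structure, simple network measures (e.g., mean out-degree per vertex) would serve as texture features; extending the walks along the temporal axis, or over three orthogonal planes, follows the same pattern with 3-D neighborhoods.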

FAPESP Grant: 16/23763-8 - Modeling and analysis of complex networks for computer vision
Grantee: Lucas Correia Ribas
Support type: Scholarships in Brazil - Doctorate
FAPESP Grant: 16/18809-9 - Deep learning and complex networks applied to computer vision
Grantee: Odemir Martinez Bruno
Support type: Research Grants - Program for Research in Partnership for Technological Innovation (PITE)
FAPESP Grant: 14/08026-1 - Artificial vision and pattern recognition applied to plant plasticity
Grantee: Odemir Martinez Bruno
Support type: Regular Research Grants