SciELO Chile Collection

Knowledge Management, Monitoring and Foresight Department
Questions or comments: productividad@anid.cl



SDIGRU: Spatial and Deep Features Integration Using Multilayer Gated Recurrent Unit for Human Activity Recognition
Indexed in
WoS: WOS:000954030700001
Scopus: SCOPUS_ID:85149902800
DOI: 10.1109/TCSS.2023.3249152
Year: 2023
Type: research article

Total Citations
Authors with Chilean Affiliation
Chilean Institutions
% International Participation
Authors with Foreign Affiliation
Foreign Institutions

Abstract



Smart video surveillance plays a significant role in public security by storing large volumes of continuous streaming data, evaluating it, and generating warnings when undesirable human activities occur. Human activity recognition in video surveillance faces challenges such as evaluating activities over growing volumes of streaming data with high computational cost and processing time. To tackle these challenges, we introduce a lightweight spatial and deep feature integration using a multilayer GRU (SDIGRU). First, we extract spatial and deep features from frame sequences of realistic human activity videos using a lightweight MobileNetV2 model and then integrate those spatial-deep features. Although deep features can be used for human activity recognition, they capture only high-level appearance, which is insufficient to correctly identify the particular human activity. Thus, we combine deep information with spatial appearance to produce detailed-level information. Furthermore, we select rich, informative features from the spatial-deep appearances. We then train a multilayer gated recurrent unit (GRU), feeding it these informative features to learn the temporal dynamics of human activity frame sequences at each GRU time step. We conduct our experiments on the benchmark YouTube11, HMDB51, and UCF101 human activity recognition datasets. The empirical results show that our method achieves strong recognition performance with low computational complexity and fast response. Finally, we compare the results with existing state-of-the-art techniques, which demonstrates the effectiveness of our method.
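The abstract describes a concrete pipeline: per-frame spatial-deep feature extraction with MobileNetV2 and a multilayer GRU over the frame sequence. The following is a minimal PyTorch sketch of that idea, assuming ImageNet-pretrained MobileNetV2 features pooled to one 1280-dimensional vector per frame and a two-layer GRU that classifies from the last time step; the hidden size, layer count, and feature-fusion details are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

class SDIGRUSketch(nn.Module):
    # Hypothetical sketch of a MobileNetV2 + multilayer GRU activity classifier,
    # not the authors' released implementation.
    def __init__(self, num_classes: int, hidden_size: int = 512, num_layers: int = 2):
        super().__init__()
        backbone = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT)
        self.features = backbone.features           # convolutional trunk; ImageNet classifier head dropped
        self.pool = nn.AdaptiveAvgPool2d(1)          # spatial pooling -> one 1280-d vector per frame
        self.gru = nn.GRU(input_size=1280, hidden_size=hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, 224, 224) frame sequences
        b, t, c, h, w = clips.shape
        x = self.features(clips.reshape(b * t, c, h, w))
        x = self.pool(x).flatten(1).reshape(b, t, -1)    # (batch, time, 1280)
        out, _ = self.gru(x)                             # temporal dynamics across the sequence
        return self.classifier(out[:, -1])               # class logits from the last time step

# Example (downloads ImageNet weights on first use); 101 classes matches UCF101:
# logits = SDIGRUSketch(num_classes=101)(torch.randn(2, 16, 3, 224, 224))  # shape (2, 101)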

External Metrics



PlumX · Altmetric · Dimensions

Shows the external impact metrics associated with the publication.

Research Disciplines



WoS: Computer Science, Information Systems; Computer Science, Cybernetics
Scopus: no disciplines assigned
SciELO: no disciplines assigned

Shows the distribution of disciplines for this publication.

WoS publications (Editions: ISSHP, ISTP, AHCI, SSCI, SCI), Scopus, SciELO Chile.

Institutional Collaboration



Shows the distribution of national and international collaboration generated by this publication.


Authors - Affiliation



No.  Author        Gender  Institution - Country
1    Ahmad, Tariq          Guilin Univ Elect Technol - China; Guilin University of Electronic Technology - China
2    Wu, Jinsong           Guilin Univ Elect Technol - China; Universidad de Chile - Chile; Guilin University of Electronic Technology - China

Shows the affiliation and (detected) gender for the publication's co-authors.

Funding



Source
CONICYT FONDECYT
CONICYT FONDEF
Chile CONICYT FONDECYT Regular
Chile CONICYT FONDEF
China Guangxi Science and Technology Plan Project (Guangxi Science and Technology Base and Talent Special Project)
Specific Research Project of Guangxi for Research Bases and Talents
China Guangxi Science and Technology Plan Project

Shows the funding sources declared in the publication.

Acknowledgments



Acknowledgment
This work was supported in part by the China Guangxi Science and Technology Plan Project (Guangxi Science and Technology Base and Talent Special Project) under Grant 2022AC20001, in part by the Chile CONICYT FONDECYT Regular under Grant 1181809, and in part by the Chile CONICYT FONDEF under Grant ID16I10466.

Shows the acknowledgment declared in the publication.