Shows external impact metrics associated with the publication. For further detail:
| Indexed | |
|---|---|
| DOI | 10.1109/TCSS.2023.3249152 |
| Year | 2023 |
| Type | research article |
Metric labels (values not captured): total citations; authors with Chilean affiliation; Chilean institutions; % international participation; authors with foreign affiliation; foreign institutions.
Smart video surveillance plays a significant role in public security by storing huge amounts of continuous stream data, evaluating them, and generating warnings when undesirable human activities are performed. Recognizing human activities in video surveillance faces many challenges, such as evaluating human activities optimally under growing volumes of streaming data with complex computation and high processing-time complexity. To tackle these challenges, we introduce a lightweight spatial-deep feature integration using a multilayer GRU (SDIGRU). First, we extract spatial and deep features from frame sequences of realistic human activity videos using a lightweight MobileNetV2 model, and then integrate those spatial-deep features. Although deep features can be used for human activity recognition, they contain only high-level appearance, which is insufficient to correctly identify the particular activity of a human. Thus, we jointly apply deep information with spatial appearance to produce detailed-level information. Furthermore, we select rich informative features from the spatial-deep appearances. Then, we train a multilayer gated recurrent unit (GRU), feeding it the informative features to learn the temporal dynamics of human activity frame sequences at each time step of the GRU. We conduct our experiments on the benchmark YouTube11, HMDB51, and UCF101 human activity recognition datasets. The empirical results show that our method achieves significant recognition performance with low computational complexity and quick response. Finally, we compare our results with existing state-of-the-art techniques, which shows the effectiveness of our method.
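To make the described pipeline concrete, below is a minimal sketch in PyTorch of a MobileNetV2 backbone feeding a multilayer GRU for clip-level activity classification. It is not the authors' implementation: the class name `SDIGRUSketch`, the hidden size, the number of GRU layers, and the use of the last time step for classification are illustrative assumptions, and the feature-selection step mentioned in the abstract is omitted.

```python
# Minimal sketch of a MobileNetV2 + multilayer GRU activity classifier,
# assuming PyTorch/torchvision. Dimensions and layer counts are assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn
from torchvision import models

class SDIGRUSketch(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, num_classes: int, hidden_size: int = 512, num_layers: int = 2):
        super().__init__()
        # Lightweight MobileNetV2 backbone extracts per-frame features.
        backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        self.features = backbone.features           # convolutional feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)         # 1280-dim vector per frame
        # Multilayer GRU models temporal dynamics across the frame sequence.
        self.gru = nn.GRU(input_size=1280, hidden_size=hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        x = clip.flatten(0, 1)                      # fold time into batch
        x = self.pool(self.features(x)).flatten(1)  # (b*t, 1280)
        x = x.view(b, t, -1)                        # restore the sequence
        out, _ = self.gru(x)                        # hidden state per time step
        return self.classifier(out[:, -1])          # classify from the last step

# Usage: logits for 2 clips of 16 frames over the 101 UCF101 classes.
# logits = SDIGRUSketch(num_classes=101)(torch.randn(2, 16, 3, 224, 224))
```

Folding the time axis into the batch lets the backbone process all frames in one pass before the GRU consumes them as a sequence, which keeps the per-frame feature extraction cheap, in line with the paper's emphasis on low computational complexity.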
| Ord. | Author | Gender | Institution - Country |
|---|---|---|---|
| 1 | Ahmad, Tariq | - | Guilin University of Electronic Technology - China |
| 2 | Wu, Jinsong | - | Guilin University of Electronic Technology - China; Universidad de Chile - Chile |
| Funding source |
|---|
| Chile CONICYT FONDECYT Regular |
| Chile CONICYT FONDEF |
| China Guangxi Science and Technology Plan Project (Guangxi Science and Technology Base and Talent Special Project) |
| Specific Research Project of Guangxi for Research Bases and Talents |
| Acknowledgment |
|---|
| This work was supported in part by the China Guangxi Science and Technology Plan Project (Guangxi Science and Technology Base and Talent Special Project) under Grant 2022AC20001, in part by the Chile CONICYT FONDECYT Regular under Grant 1181809, and in part by the Chile CONICYT FONDEF under Grant ID16I10466. |