Shows external impact metrics associated with the publication. For further detail:
| Field | Value |
|---|---|
| Indexed | |
| DOI | 10.3390/S21082637 |
| Year | 2021 |
| Type | research article |
[Metrics displayed: total citations; authors with Chilean affiliation; Chilean institutions; % international participation; authors with foreign affiliation; foreign institutions.]
Convolutional neural networks (CNNs) have been extensively employed for image classification due to their high accuracy. However, inference is a computationally intensive process that often requires hardware acceleration to operate in real time. For mobile devices, the power consumption of graphics processing units (GPUs) is frequently prohibitive, and field-programmable gate arrays (FPGAs) become a solution to perform inference at high speed. Although previous works have implemented CNN inference on FPGAs, their high utilization of on-chip memory and arithmetic resources complicates their application on resource-constrained edge devices. In this paper, we present a scalable, low-power, low-resource-utilization accelerator architecture for inference on the MobileNet V2 CNN. The architecture uses a heterogeneous system with an embedded processor as the main controller, external memory to store network data, and dedicated hardware implemented on reconfigurable logic with a scalable number of processing elements (PEs). Implemented on a XCZU7EV FPGA running at 200 MHz and using four PEs, the accelerator infers with 87% top-5 accuracy and processes an image of 224×224 pixels in 220 ms. It consumes 7.35 W of power and uses less than 30% of the logic and arithmetic resources used by other MobileNet FPGA accelerators.
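The key scalability idea in the abstract is that work is partitioned across a configurable number of processing elements. As a rough software analogy only (not the paper's hardware design, and all names here are hypothetical), the sketch below shows how the channels of a depthwise convolution, MobileNet V2's dominant operation, can be split round-robin across `num_pes` independent workers, which is the property that lets such an array scale by simply adding PEs:

```python
import numpy as np

def depthwise_conv_per_pe(x, w, num_pes=4):
    """Toy analogy of a scalable PE array: each of `num_pes` workers
    computes a disjoint subset of channels of a depthwise convolution
    (valid padding, stride 1). x has shape (C, H, W); w has shape (C, K, K).
    Because channels are independent, the partitioning is embarrassingly
    parallel -- the same reason hardware PEs can scale."""
    C, H, W = x.shape
    K = w.shape[-1]
    out = np.zeros((C, H - K + 1, W - K + 1))
    # Static round-robin assignment of channels to PEs.
    for pe in range(num_pes):
        for c in range(pe, C, num_pes):
            for i in range(H - K + 1):
                for j in range(W - K + 1):
                    out[c, i, j] = np.sum(x[c, i:i + K, j:j + K] * w[c])
    return out
```

Since every PE touches a disjoint channel slice, the result is identical regardless of `num_pes`; in hardware, the same independence is what allows throughput to scale roughly linearly with the number of PEs.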
| Ord. | Author | Gender | Institution - Country |
|---|---|---|---|
| 1 | Perez, Ignacio | Male | Universidad de Concepción - Chile |
| 2 | FIGUEROA-YEVENES, MAXIMILIANO | Male | Universidad de Concepción - Chile |
| Source |
|---|
| FONDECYT (Fondo Nacional de Desarrollo Científico y Tecnológico) |
| ANID (National Agency for Research and Development) |
| Acknowledgment |
|---|
| Funding: This research was funded by the National Agency for Research and Development (ANID) through graduate scholarship folio 22180733 and FONDECYT Regular Grant No. 1180995. |