Shows the external impact metrics associated with this publication.
| Field | Value |
|---|---|
| Indexed | |
| DOI | 10.1109/GLOBECOM46510.2021.9685855 |
| Year | 2021 |
| Type | proceedings paper |
[Impact metrics panel: Total Citations; Authors with Chilean Affiliation; Chilean Institutions; % International Participation; Authors with Foreign Affiliation; Foreign Institutions]
In order to perform competitive privacy-guaranteed object detection, we propose an end-to-end model called Privacy-preserving Deep Transformation Self-attention (PPDTSA). This model ensures the privacy of the inference results. It has a low-complexity hierarchical structure with a relatively small number of hyper-parameters. Consistency of prediction is achieved through the encoding and decoding blocks of the self-attention mechanism, which enable points of interest to be located. The focal loss is estimated from the foreground-background imbalance. The remaining dense blocks enable image details to be retained and the Region of Interest to be expanded. At the same time, the objects detected in the image are protected through a privacy noise volume specified by the user. Experimental results demonstrate that PPDTSA achieves superior performance on the MOT20 dataset compared with three other state-of-the-art object detection models.
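The abstract attributes the loss to the foreground-background imbalance, which is the problem the standard focal loss (Lin et al., 2017) was designed to address. For reference only, the sketch below is a minimal NumPy implementation of that generic focal loss; it is not the PPDTSA code, and the function name, hyper-parameter defaults, and toy data are illustrative assumptions.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Generic binary focal loss (Lin et al., 2017), illustrative only.

    p : predicted foreground probabilities, shape (N,)
    y : ground-truth labels in {0, 1}, shape (N,)
    alpha, gamma : balancing and focusing hyper-parameters
    """
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    # p_t is the probability the model assigns to the true class
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    # The (1 - p_t)**gamma factor down-weights easy (mostly background) samples,
    # so the many confidently-classified background anchors contribute little.
    loss = -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
    return loss.mean()

# Toy example: 95 easy background anchors vs. 5 harder foreground anchors
y = np.array([0] * 95 + [1] * 5)
p = np.concatenate([np.full(95, 0.05), np.full(5, 0.6)])
print(focal_loss(p, y))
```

In this toy run the background anchors, although far more numerous, contribute almost nothing to the averaged loss, which is the imbalance-handling behaviour the abstract refers to.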
| Ord. | Author | Gender | Institution - Country |
|---|---|---|---|
| 1 | Ma, Bo | - | Auckland Univ Technol - New Zealand |
| 2 | Wu, Jinsong | - | GUET - China; Universidad de Chile - Chile |
| 3 | Lai, Edmund | Male | Auckland Univ Technol - New Zealand |
| 4 | Hu, Shuolin | - | Northeastern Univ - United States |
| 5 | IEEE | Corporation | |