Shows the distribution of disciplines for this publication.
WoS publications (Editions: ISSHP, ISTP, AHCI, SSCI, SCI), Scopus, SciELO Chile.
| Indexed | |
|---|---|
| DOI | |
| Year | 2015 |
| Type | |
| Total Citations | Authors with Chilean Affiliation | Chilean Institutions | % International Participation | Authors with Foreign Affiliation | Foreign Institutions |
|---|---|---|---|---|---|
Facial expressions and speech are elements that provide emotional information about the user through multiple communication channels. In this paper, a novel multimodal emotion recognition system based on visual and auditory information processing is proposed. The proposed approach is used in real affective human-robot communication to estimate five different emotional states (i.e., happiness, anger, fear, sadness and neutral), and it consists of two subsystems with a similar structure. The first subsystem achieves robust facial feature extraction based on filters applied consecutively to the edge image and the use of a Dynamic Bayesian Classifier. A similar classifier is used in the second subsystem, where the input is associated with a set of speech descriptors, such as speech rate, energy and pitch. Both subsystems are finally combined in real time. The results of this multimodal approach show the robustness and accuracy of the methodology compared with single-modality emotion recognition systems.
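The abstract states that the visual and auditory subsystems are combined in real time, with each subsystem producing an estimate over the five emotional states. A minimal sketch of one common way to do such decision-level fusion, assuming each classifier outputs a posterior distribution over the emotions (the weighted-product rule, the weight value, and all function names here are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

# The five emotional states estimated by the system described in the abstract.
EMOTIONS = ["happiness", "anger", "fear", "sadness", "neutral"]

def fuse_posteriors(p_face, p_speech, w_face=0.5):
    """Combine per-modality posteriors with a weighted product rule.

    Hypothetical fusion step: a weighted geometric mean of the facial
    and speech posteriors, renormalized to sum to one.
    """
    p_face = np.asarray(p_face, dtype=float)
    p_speech = np.asarray(p_speech, dtype=float)
    fused = (p_face ** w_face) * (p_speech ** (1.0 - w_face))
    return fused / fused.sum()

def classify(p_face, p_speech, w_face=0.5):
    """Return the emotion label with the highest fused posterior."""
    fused = fuse_posteriors(p_face, p_speech, w_face)
    return EMOTIONS[int(np.argmax(fused))]

# Example: both modalities lean toward "happiness".
label = classify([0.6, 0.1, 0.1, 0.1, 0.1],
                 [0.5, 0.2, 0.1, 0.1, 0.1])
print(label)  # → happiness
```

The product rule lets a confident modality dominate while still penalizing emotions either channel considers unlikely; a sum rule or a learned combiner would be equally plausible alternatives under the same assumptions.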
| No. | Author | Gender | Institution - Country |
|---|---|---|---|
| 1 | Cid, Felipe | Male | Universidad Austral de Chile - Chile |
| 2 | Manso, Luis J. | Male | Universidad de Extremadura - Spain |
| 3 | Núñez, Pedro | Male | Universidad de Extremadura - Spain |