Shows external impact metrics associated with the publication. For further detail:
| Indexed | |
|---|---|
| DOI | 10.1109/ICCVW60793.2023.00456 |
| Year | 2023 |
| Type | proceedings paper |
[Metrics panel: total citations; authors with Chilean affiliation; Chilean institutions; % international participation; authors with foreign affiliation; foreign institutions. Values not captured in this record.]
Mixed reality applications require tracking the user's full-body motion to enable an immersive experience. However, typical head-mounted devices can only track head and hand movements, leading to a limited reconstruction of full-body motion due to variability in lower body configurations. We propose BoDiffusion - a generative diffusion model for motion synthesis to tackle this under-constrained reconstruction problem. We present a time and space conditioning scheme that allows BoDiffusion to leverage sparse tracking inputs while generating smooth and realistic full-body motion sequences. To the best of our knowledge, this is the first approach that uses the reverse diffusion process to model full-body tracking as a conditional sequence generation task. We conduct experiments on the large-scale motion-capture dataset AMASS and show that our approach outperforms the state-of-the-art approaches by a significant margin in terms of full-body motion realism and joint reconstruction error.
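The abstract describes modeling full-body tracking as conditional sequence generation via the reverse diffusion process: starting from noise, the model iteratively denoises a full-body pose while conditioning each step on the sparse head/hand tracking signal. The following is a minimal sketch of that sampling idea under stated assumptions; the denoiser here is a hypothetical linear stand-in, not the BoDiffusion network, and the joint counts, dimensions, and step schedule are illustrative only.

```python
import numpy as np

# Minimal sketch of conditional DDPM-style reverse diffusion sampling.
# All sizes below are assumptions, not the paper's actual configuration.
T = 50                 # number of diffusion steps (assumption)
N_JOINTS, DIM = 22, 6  # full-body joints x per-joint feature dim (assumption)
COND_DIM = 3 * 6       # head + two hands sparse tracking signal (assumption)

betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)

rng = np.random.default_rng(0)
# Hypothetical stand-in for the learned network: a small random linear map.
W = rng.normal(scale=0.01, size=(N_JOINTS * DIM + COND_DIM + 1, N_JOINTS * DIM))

def denoiser(x_t, cond, t):
    """Hypothetical noise predictor eps_theta(x_t, cond, t)."""
    inp = np.concatenate([x_t.ravel(), cond, [t / T]])
    return (inp @ W).reshape(N_JOINTS, DIM)

def sample_motion(cond):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise,
    conditioning every step on the sparse head/hand tracking signal."""
    x = rng.normal(size=(N_JOINTS, DIM))
    for t in reversed(range(T)):
        eps = denoiser(x, cond, t)
        # Standard DDPM posterior mean update
        x = (x - (betas[t] / np.sqrt(1.0 - alphas_bar[t])) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add noise at every step except the last
            x = x + np.sqrt(betas[t]) * rng.normal(size=x.shape)
    return x

pose = sample_motion(rng.normal(size=COND_DIM))
print(pose.shape)  # (22, 6)
```

The key design point the abstract highlights is the conditioning: the sparse tracking input enters the denoiser at every reverse step, which is what turns unconditional motion synthesis into a tracking-constrained reconstruction.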
| Ord. | Author | Gender | Institution - Country |
|---|---|---|---|
| 1 | Castillo, Angela | - | Universidad de Los Andes, Chile - Chile |
| 2 | Escobar, Maria | - | Universidad de Los Andes, Chile - Chile |
| 3 | Jeanneret, Guillaume | - | Univ Caen Normandie - France |
| 4 | Pumarola, Albert | - | Meta AI - United States |
| 5 | Arbelaez, Pablo | - | Universidad de Los Andes, Chile - Chile |
| 6 | Thabet, Ali | - | Meta AI - United States |
| 7 | Sanakoyeu, Artsiom | - | Meta AI - United States |
| 8 | IEEE | Corporation | |