Robust Human Motion Prediction via Integration of Spatial and Temporal Cues

Affiliation: 1. College of Computer Science and Technology, Zhejiang University of Technology; 2. College of Science, Zhejiang University of Technology


Fund Project:

the National Key R&D Program of China (No. 2018YFB1305200); the Natural Science Foundation of Zhejiang Province (LGG21F030011)

    Abstract:

    Research on human motion prediction has made significant progress owing to its importance in the development of various artificial intelligence applications. However, the prediction procedure often suffers from undesirable discontinuities and long-term error accumulation, which strongly limit its accuracy. To address these issues, a robust human motion prediction method via integration of spatial and temporal cues (RISTC) is proposed. This method captures sufficient spatio-temporal correlation of the observable sequence of human poses by utilizing a spatio-temporal mixed feature extractor (MFE). In multi-layer MFEs, channel-graph united attention blocks extract augmented spatial features of the human poses in the channel and spatial dimensions. Additionally, multi-scale temporal blocks are designed to effectively capture complicated and highly dynamic temporal information. Experiments on the Human3.6M and CMU Mocap datasets show that the proposed network yields higher prediction accuracy than state-of-the-art methods.
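    The abstract's two ingredients — channel-level attention over pose features and multi-scale temporal feature extraction — can be illustrated with a minimal NumPy sketch. Everything below (function names, shapes, the softmax channel weighting, and the stride set) is an illustrative assumption for exposition, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of the two ideas named in the abstract; all names,
# shapes, and design choices here are assumptions, not the RISTC code.

def channel_attention(x):
    """Reweight feature channels by a softmax over their temporal means.

    x: (T, C) array -- T observed frames, C channels
       (flattened joint coordinates).
    """
    scores = x.mean(axis=0)               # per-channel temporal summary
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over channels
    return x * weights                    # (T, C), channels re-scaled

def multi_scale_temporal(x, scales=(1, 2, 4)):
    """Stack frame differences at several temporal strides.

    Short strides capture fast local dynamics, long strides slower
    trends -- one plausible reading of "multi-scale temporal blocks".
    """
    feats = [x]
    for s in scales:
        diff = np.zeros_like(x)
        diff[s:] = x[s:] - x[:-s]         # velocity at stride s
        feats.append(diff)
    return np.concatenate(feats, axis=1)  # (T, C * (1 + len(scales)))

# Toy observable sequence: 10 frames of a 17-joint 3-D skeleton.
poses = np.random.default_rng(0).normal(size=(10, 17 * 3))
features = multi_scale_temporal(channel_attention(poses))
print(features.shape)                     # (10, 204)
```

    In a real network these operations would be learned layers stacked inside each MFE, with the attention weights predicted from the input rather than fixed by a softmax of channel means.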

History
  • Received: May 14, 2024
  • Revised: July 09, 2024
  • Accepted: August 05, 2024