Advanced articulated motion prediction

Motion synthesis using machine learning has seen rapid advancements in recent years. Unlike traditional animation methods, using deep learning to generate human movement offers the unique advantage of producing slight variations between motions, similar to the natural variability observed in real examples. While several motion synthesis methods have achieved remarkable success in generating highly varied and probabilistic animations, controlling the synthesized animation in real time while retaining stochastic elements remains a serious challenge. The main purpose of this work is to develop a Conditional Generative Adversarial Network that generates real-time controlled motion balancing realism and stochastic variability. To achieve this, three novel Generative Adversarial models were developed. The models differ in the architecture of their generators, which use a Mixture-of-Experts method, a Latent-Modulated Noise Injection technique, and a Transformer-based architecture, respectively. We consider the latter to be the main contribution of this work, and we evaluate our method by comparing it to the other models on both stylized locomotion data and complex, aperiodic dance sequences, assessing its ability to generate diverse, realistic motions and to mix different styles while responding to motion control. By exploring the advantages and disadvantages of each architecture, our findings highlight the trade-offs between motion quality, variety, and generalization in real-time synthesis, contributing to the ongoing development of more flexible and varied animation techniques.
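To make the generator variants concrete, the sketch below illustrates one of the named ideas, a Mixture-of-Experts generator: a gating network reads the control signal and blends the weights of several expert layers per sample, so that noise drives variation while the control steers the motion. This is a minimal illustrative example in PyTorch, not the authors' implementation; all names and dimensions (MoELinear, MoEGenerator, noise_dim, ctrl_dim, pose_dim, n_experts) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELinear(nn.Module):
    """Linear layer whose weights are a per-sample blend of expert weights."""
    def __init__(self, in_dim, out_dim, n_experts):
        super().__init__()
        # One weight matrix and bias per expert.
        self.weight = nn.Parameter(torch.randn(n_experts, out_dim, in_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_experts, out_dim))

    def forward(self, x, gate):
        # x: (batch, in_dim); gate: (batch, n_experts), each row summing to 1.
        w = torch.einsum('be,eoi->boi', gate, self.weight)  # blended weights
        b = gate @ self.bias                                # blended biases
        return torch.einsum('boi,bi->bo', w, x) + b

class MoEGenerator(nn.Module):
    """Conditional generator: noise z + control signal -> pose vector."""
    def __init__(self, noise_dim=64, ctrl_dim=16, pose_dim=69, n_experts=4):
        super().__init__()
        # Gating network conditioned on the control signal alone.
        self.gating = nn.Sequential(
            nn.Linear(ctrl_dim, 32), nn.ELU(),
            nn.Linear(32, n_experts), nn.Softmax(dim=-1))
        self.l1 = MoELinear(noise_dim + ctrl_dim, 256, n_experts)
        self.l2 = MoELinear(256, pose_dim, n_experts)

    def forward(self, z, ctrl):
        gate = self.gating(ctrl)
        h = F.elu(self.l1(torch.cat([z, ctrl], dim=-1), gate))
        return self.l2(h, gate)

# Usage: re-drawing z yields varied poses for the same motion control.
gen = MoEGenerator()
ctrl = torch.zeros(8, 16)
poses = gen(torch.randn(8, 64), ctrl)  # (8, 69)
```

In a full conditional GAN, a generator of this form would be trained against a discriminator that scores pose (or pose-sequence) realism given the same control signal; the Latent-Modulated Noise Injection and Transformer variants named above replace the blended-expert layers while keeping the same noise-plus-control interface.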

Belessis, A., Loi, I., & Moustakas, K. (2025). Advanced articulated motion prediction. Frontiers in Computer Science, 7, 1549693. https://doi.org/10.3389/fcomp.2025.1549693