- [ICLR 2025 Submission] Realistic-Gesture: Co-Speech Gesture Video Generation through Semantic-aware Gesture Representation. [paper]
  Anonymous authors
- [ICLR 2025 Submission] Co3Gesture: Towards Coherent Concurrent Co-speech 3D Gesture Generation with Interactive Diffusion. [paper]
  Anonymous authors
- [ICLR 2025 Submission] TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio Motion Embedding and Diffusion Interpolation. [paper]
  Anonymous authors
- [ECCV 2024] Co-Speech Gesture Video Generation with 3D Human Meshes. [paper]
  Aniruddha Mahapatra, Richa Mishra, Ziyi Chen, Boyang Ding, Renda Li, Shoulei Wang, Jun-Yan Zhu, Peng Chang, Mei Han, Jing Xiao
- [ACL 2024] LLM Knows Body Language, Too: Translating Speech Voices into Human Gestures. [paper]
  Chenghao Xu, Guangtao Lyu, Jiexi Yan, Muli Yang, Cheng Deng
- [CVPR 2024] EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling. [paper]
  Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, Michael J. Black
- [CVPR 2024] Weakly-Supervised Emotion Transition Learning for Diverse 3D Co-speech Gesture Generation. [paper]
  Xingqun Qi, Jiahao Pan, Peng Li, Ruibin Yuan, Xiaowei Chi, Mengfei Li, Wenhan Luo, Wei Xue, Shanghang Zhang, Qifeng Liu, Yike Guo
- [CVPR 2024] Towards Variable and Coordinated Holistic Co-Speech Motion Generation.
  Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, Changxing Ding
- [CVPR 2024] ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis.
  Muhammad Hamza Mughal, Rishabh Dabral, Ikhsanul Habibie, Lucia Donatelli, Marc Habermann, Christian Theobalt
- [CVPR 2024] Co-Speech Gesture Video Generation via Motion-Decoupled Diffusion Model.
  Xu He, Qiaochu Huang, Zhensong Zhang, Zhiwei Lin, Zhiyong Wu, Sicheng Yang, Minglei Li, Zhiyi Chen, Songcen Xu, Xiaofei Wu
- [ICCV 2023] Continual Learning for Personalized Co-speech Gesture Generation. [paper]
  Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, Louis-Philippe Morency
- [ICCV 2023] LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation. [paper]
  Yihao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, Shenghua Gao
- [CVPR 2023] QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation. [paper]
  Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Haolin Zhuang
- [CVPR 2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation. [paper]
  Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, Lequan Yu
- [CVPR 2023] Diverse 3D Hand Gesture Prediction from Body Dynamics by Bilateral Hand Disentanglement. [paper]
  Xingqun Qi, Chen Liu, Muyi Sun, Lincheng Li, Changjie Fan, Xin Yu
- [CVPR 2023] Co-speech Gesture Synthesis by Reinforcement Learning with Contrastive Pre-trained Rewards. [paper]
  Mingyang Sun, Mengchen Zhao, Yaqing Hou, Minglei Li, Huang Xu, Songcen Xu, Jianye Hao
- [ECCV 2022] BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis. [paper]
  Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, Bo Zheng
- [ECCV 2022] Audio-Driven Stylized Gesture Generation with Flow-Based Model. [paper]
  Sheng Ye, Yu-Hui Wen, Yanan Sun, Ying He, Ziyang Zhang, Yaoyuan Wang, Weihua He, Yong-Jin Liu
- [CVPR 2022] Audio-Driven Neural Gesture Reenactment With Video Motion Graphs. [paper]
  Yang Zhou, Jimei Yang, Dingzeyu Li, Jun Saito, Deepali Aneja, Evangelos Kalogerakis
- [CVPR 2022] Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation. [paper]
  Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou, Wayne Wu, Bo Dai, Bolei Zhou
- [CVPR 2022] SEEG: Semantic Energized Co-Speech Gesture Generation. [paper]
  Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, Yi Yang
- [CVPR 2022] Low-Resource Adaptation for Personalized Co-Speech Gesture Generation. [paper]
  Chaitanya Ahuja, Dong Won Lee, Louis-Philippe Morency
- [ICCV 2021] Speech Drives Templates: Co-Speech Gesture Synthesis With Learned Templates. [paper]
  Shenhan Qian, Zhi Tu, Yihao Zhi, Wen Liu, Shenghua Gao
- [ICCV 2021] Audio2Gestures: Generating Diverse Gestures From Speech Audio With Conditional Variational Autoencoders. [paper]
  Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, Linchao Bao
- [ACM MM 2021] Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning. [paper]
  Uttaran Bhattacharya, Elizabeth Childs, Nicholas S. Rewkowski, Dinesh Manocha
- [CVPR 2021] iMiGUE: An Identity-Free Video Dataset for Micro-Gesture Understanding and Emotion Analysis. [paper]
  Xin Liu, Henglin Shi, Haoyu Chen, Zitong Yu, Xiaobai Li, Guoying Zhao
- [CVPR 2021] Body2Hands: Learning To Infer 3D Hands From Conversational Gesture Body Dynamics. [paper]
  Evonne Ng, Shiry Ginosar, Trevor Darrell, Hanbyul Joo
- [ECCV 2020] Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach. [paper]
  Chaitanya Ahuja, Dong Won Lee, Yukiko I. Nakano, Louis-Philippe Morency
- [ICCV 2019] Talking With Hands 16.2M: A Large-Scale Dataset of Synchronized Body-Finger Motion and Audio for Conversational Motion Analysis and Synthesis. [paper]
  Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S. Srinivasa, Yaser Sheikh
- [CVPR 2019] Learning Individual Styles of Conversational Gesture. [paper]
  Shiry Ginosar, Amir Bar, Gefen Kohavi, Caroline Chan, Andrew Owens, Jitendra Malik