ImaGGen: Zero-Shot Generation of Co-Speech Semantic Gestures Grounded in Language and Image Input

Header Image

Our work combines an image feature analysis pipeline with a semantic matching module and a realization engine to overlay natural, semantically rich gestures onto generated beat gestures.

Abstract

Human communication combines speech with expressive nonverbal cues such as hand gestures that serve manifold communicative functions. Yet current generative AI-based gesture generation approaches are, for the most part, restricted to simple, repetitive beat gestures that accompany the rhythm of speech but do not contribute to communicating semantic meaning.

This paper tackles a core challenge in co-speech gesture synthesis: generating iconic or deictic gestures that are semantically coherent with a verbal utterance. Our basic assumption is that such gestures cannot be derived from language input alone, which inherently lacks the visual meaning that is often carried autonomously by gestures. We thus introduce a zero-shot system that generates gestures from a given language input and is additionally informed by imagistic input, without manual annotation or human intervention. Our method integrates an image analysis pipeline that extracts key object properties such as shape, symmetry, and alignment, together with a semantic matching module that links these visual details to the spoken text. An inverse kinematics engine then synthesizes iconic and deictic gestures and combines them with co-generated natural beat gestures for coherent multimodal communication. A comprehensive user study demonstrates the effectiveness of our approach. In scenarios where speech alone was ambiguous, gestures generated by our system significantly improved participants’ ability to identify object properties, confirming their interpretability and communicative value. While challenges remain in representing complex shapes, our results highlight the importance of context-aware semantic gestures for creating expressive and collaborative virtual agents or avatars, marking a substantial step towards efficient, robust, and embodied human-agent interaction.
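
To make this pipeline concrete, the minimal Python sketch below shows how extracted object properties could be matched against the words of an utterance to schedule iconic gestures alongside default beat gestures. This is an illustrative sketch only: the names ObjectProperties, GesturePlan, and match_semantics are hypothetical and not part of the ImaGGen code base, and the keyword-based matching rule is a deliberate simplification of the paper's semantic matching module.

```python
# Illustrative sketch only: these names are hypothetical and do not correspond
# to the (unreleased) ImaGGen code base. The sketch mirrors the stages named in
# the abstract: image feature analysis output, semantic matching against the
# utterance, and a gesture plan that layers semantic gestures over beat gestures.

from dataclasses import dataclass, field


@dataclass
class ObjectProperties:
    """Visual properties extracted from the reference image."""
    name: str          # object label, e.g. "fountain"
    shape: str         # e.g. "round", "rectangular"
    symmetric: bool    # whether the object is (left/right) symmetric
    alignment: str     # spatial arrangement, e.g. "horizontal"


@dataclass
class GesturePlan:
    """A gesture scheduled at a given word position in the utterance."""
    word_index: int
    gesture_type: str  # "iconic", "deictic", or "beat"
    payload: dict = field(default_factory=dict)


def match_semantics(words: list[str], props: ObjectProperties) -> list[GesturePlan]:
    """Link visual details to the spoken text: words that refer to the analyzed
    object receive an iconic gesture carrying its visual properties; all other
    positions fall back to co-generated beat gestures."""
    plans = []
    for i, word in enumerate(words):
        if word.lower().strip(".,") == props.name.lower():
            plans.append(GesturePlan(i, "iconic", {
                "shape": props.shape,
                "symmetric": props.symmetric,
                "alignment": props.alignment,
            }))
        else:
            plans.append(GesturePlan(i, "beat"))
    return plans


if __name__ == "__main__":
    # Toy example: an utterance about a fountain whose round, symmetric shape
    # was extracted from a reference image.
    fountain = ObjectProperties(name="fountain", shape="round",
                                symmetric=True, alignment="horizontal")
    utterance = "The fountain stands in the middle of the square".split()
    for plan in match_semantics(utterance, fountain):
        print(plan)
```

In the actual system, such a plan would be handed to the inverse kinematics engine for realization; the toy example above simply prints an iconic gesture for the word "fountain" and beat gestures for the remaining words.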

Fountain demo image

The semantic gestures are layered on top of the beat gestures and performed automatically, based on the information extracted from the reference image.

Alignment demo image

Our system automatically extracts positional information, such as the object alignment shown in the video, and incorporates it into the semantic gesture generation process.

Project Code

The code will be released upon paper acceptance.