Not Like Transformers: Drop the Beat Representation for Dance Generation with Mamba-Based Diffusion Model

1Artificial Intelligence Graduate School, UNIST, 2Department of Computer Science, DGIST
WACV 2026

MambaDance generates 3D dance from in-the-wild music ('TBD' by TBD).

Abstract

Dance is a form of human motion characterized by emotional expression and communication, playing a role in various fields such as music, virtual reality, and content creation. Existing methods for dance generation often fail to adequately capture the inherently sequential, rhythmical, and music-synchronized characteristics of dance. In this paper, we propose a new dance generation approach that leverages a Mamba-based diffusion model. Mamba, specialized for handling long and autoregressive sequences, is integrated into our diffusion model as an alternative to the off-the-shelf Transformer. Additionally, considering the critical role of musical beats in dance choreography, we propose a Gaussian-based beat representation to explicitly guide the decoding of dance sequences. Experiments on the AIST++ dataset show that our proposed method effectively reflects essential dance characteristics and outperforms state-of-the-art methods.

Method

Overall architecture of MambaDance. We extract a music feature $m$ and a novel beat representation $b$ from the binary beat mask of the feature (blue box). The two-stage diffusion architecture enables length-agnostic generation in a single inference (green box). The diffusion decoder consists of the proposed Mamba-based modules, i.e., Single-Modal Mamba (SMM), Cross-Modal Mamba (CMM), and Adaptive Linear Modulation (ADaLM) (gray box).
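To make the beat representation concrete, the following is a minimal sketch of one plausible way to turn a binary beat mask into a smooth Gaussian-based curve: place a Gaussian bump centered at every beat frame. The function name, the kernel width `sigma`, and the clipping to $[0, 1]$ are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_beat_representation(beat_mask: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Convert a binary beat mask (1 at beat frames, 0 elsewhere) into a
    smooth per-frame curve by summing a Gaussian bump at each beat frame.

    Illustrative sketch only; kernel width and normalization are assumptions.
    """
    T = len(beat_mask)
    t = np.arange(T)
    rep = np.zeros(T, dtype=np.float64)
    for beat_t in np.flatnonzero(beat_mask):
        rep += np.exp(-((t - beat_t) ** 2) / (2.0 * sigma ** 2))
    # Clip so overlapping bumps from nearby beats stay bounded in [0, 1].
    return np.minimum(rep, 1.0)

# Example: beats at frames 10 and 30 in a 40-frame clip.
mask = np.zeros(40)
mask[[10, 30]] = 1
b = gaussian_beat_representation(mask)
```

Compared to the raw binary mask, such a soft representation gives the decoder a non-zero guidance signal in the frames surrounding each beat, which matches the intuition that choreography anticipates and follows beats rather than reacting only at the exact beat frame.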

Comparisons on FineDance Dataset

Qualitative comparison of MambaDance against state-of-the-art methods on the FineDance dataset. Please unmute the video to evaluate how well the generated dance synchronizes with the music beats.

Comparisons on AIST++ Dataset

Qualitative comparison of MambaDance against state-of-the-art methods on the AIST++ dataset. Please unmute the video to evaluate how well the generated dance synchronizes with the music beats.

BibTeX (TBD)


    @article{park2026mambadance,
      title={Not Like Transformers: Drop the Beat Representation for Dance Generation with Mamba-Based Diffusion Model},
      author={Sangjune Park and Inhyeok Choi and Donghyeon Soon and Youngwoo Jeon and Kyungdon Joo},
      journal={arXiv preprint arXiv:},
      year={2026}
    }