Recent advancements in customized video generation have led to significant improvements in the simultaneous adaptation of appearance and motion. Prior methods typically decouple appearance and motion training, but this stage-wise strategy often introduces concept interference, resulting in inaccurate rendering of appearance features or motion patterns. Another challenge is appearance contamination, where background and foreground elements from reference videos distort the customized subject. In this work, we propose JointTuner, a novel framework that jointly optimizes the appearance and motion components through two key innovations: Synaptic Low-Rank Adaptation (Synaptic LoRA) and the Appearance-independent Temporal Loss (AiT Loss). Synaptic LoRA introduces a synaptic regulator, implemented as a context-aware linear activation layer, that dynamically guides LoRA modules to focus on either subject appearance or motion patterns, enabling consistent optimization across the spatial and temporal dimensions. AiT Loss disrupts the gradient flow of appearance-related components, guiding the model to focus exclusively on motion learning and minimizing appearance interference. JointTuner is compatible with both UNet-based models (e.g., ZeroScope) and Diffusion Transformer-based models (e.g., CogVideoX), supporting the generation of longer, higher-quality customized videos. Additionally, we present a systematic evaluation framework for appearance-motion combined customization, covering 90 combinations assessed along four critical dimensions: semantic alignment, motion dynamism, temporal consistency, and perceptual quality.
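To make the Synaptic LoRA idea concrete, the sketch below shows one plausible PyTorch realization in which a context-aware linear layer (the synaptic regulator) gates a standard low-rank update per token. The layer shapes, gating function, and initialization are illustrative assumptions, not the exact design used in JointTuner.

```python
import torch
import torch.nn as nn


class SynapticLoRA(nn.Module):
    """Minimal sketch: a LoRA update gated by a context-aware linear regulator."""

    def __init__(self, in_dim: int, out_dim: int, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.down = nn.Linear(in_dim, rank, bias=False)   # LoRA "A" projection
        self.up = nn.Linear(rank, out_dim, bias=False)    # LoRA "B" projection
        nn.init.zeros_(self.up.weight)                    # start as a zero update
        self.regulator = nn.Linear(in_dim, 1)             # synaptic regulator (assumed form)
        self.scale = scale

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Per-token gate derived from the input context steers the low-rank
        # update toward appearance or motion cues.
        gate = torch.sigmoid(self.regulator(hidden_states))
        return self.scale * gate * self.up(self.down(hidden_states))
```

In use, such a module would be attached to a frozen projection, e.g. `output = frozen_linear(x) + synaptic_lora(x)`, so that only the LoRA and regulator weights receive gradient updates.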
The architecture of JointTuner, an adaptive joint training framework, consists of two main steps: (1) inserting Synaptic LoRA into the Transformer layers for efficient fine-tuning; (2) optimizing Synaptic LoRA with two losses, where the original diffusion loss preserves appearance details and the AiT Loss targets motion patterns. The pre-trained text-to-video model is frozen, and only the Synaptic LoRA parameters are fine-tuned. During inference, the trained weights are used to generate customized videos.
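As a rough illustration of step (2), the hedged training-step sketch below freezes the backbone, optimizes only the Synaptic LoRA parameters, and combines the standard diffusion loss with an appearance-independent term that detaches an appearance proxy (here, the temporal mean of the prediction) so gradients carry only motion information. The backbone interface (`add_noise`, forward signature) and this particular appearance proxy are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def training_step(base_model, optimizer, latents, text_emb):
    # Assumes: base_model is a frozen text-to-video diffusion backbone already
    # wired with SynapticLoRA modules; optimizer was built over only the
    # Synaptic LoRA parameters; latents has shape (batch, frames, C, H, W).
    noise = torch.randn_like(latents)
    t = torch.randint(0, 1000, (latents.shape[0],), device=latents.device)
    noisy = base_model.add_noise(latents, noise, t)   # assumed scheduler helper
    pred = base_model(noisy, t, text_emb)             # assumed forward signature

    # (a) Original diffusion loss: preserves subject appearance details.
    diff_loss = F.mse_loss(pred, noise)

    # (b) AiT-style loss: detach the appearance proxy (temporal mean) so the
    # gradient path through appearance-related components is blocked and only
    # frame-to-frame motion information drives the update.
    appearance = pred.mean(dim=1, keepdim=True).detach()
    ait_loss = F.mse_loss(pred - appearance,
                          noise - noise.mean(dim=1, keepdim=True))

    loss = diff_loss + ait_loss
    optimizer.zero_grad()
    loss.backward()   # only Synaptic LoRA parameters receive gradients
    optimizer.step()
    return loss.item()
```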
@article{chen2025jointtuner,
  title={JointTuner: Appearance-Motion Adaptive Joint Training for Customized Video Generation},
  author={Chen, Fangda and Zhao, Shanshan and Xu, Chuanfu and Lan, Long},
  journal={arXiv preprint arXiv:2503.23951},
  year={2025}
}